The Opportunities and Limits of Societal Verification

The Opportunities and Limits of Societal Verification, by Kelsey Hartigan and Corey Hinderstein, makes the case that work done by non-government parties (societal verification) has an important role to play in arms control verification. The article discusses various models for societal verification, its challenges, and how governments can make use of it. The article concludes that the best way for governments to use societal verification in arms control verification is through networks of outside experts. These experts would serve as “canaries in the coal mine”, whose findings get the attention of the government officials who have the final say. The article also suggests that the government should make use of public (open-source) information. However, because the article doesn’t focus on outside experts, it is vague about important details: how outside experts can be utilized, how the government can help them, and what the potential pitfalls of relying on them are.

The focus of the article is quite broad. It primarily discusses opportunities for arms control verification that have arisen from the growth of the internet. Namely, a vast amount of data that is important for verification is available online, and this data can be accessed by many people not affiliated with the government. This is relevant for arms control because many non-government weapons experts, for example in academia, can easily find data online, such as photos of sensitive military equipment, from traditional and social media. These experts can use this data to discover arms control treaty violations and other important facts.

One example of this that the article mentions is the investigation of North Korean Transporter Erector Launchers (TELs) by the Arms Control Wonk network/blog. In this case, academics and others outside of the government compared photos of TELs from a North Korean military parade with photos of Chinese TELs taken from social media, and uncovered the transfer of TELs from China to North Korea in violation of sanctions. This transfer was not publicly known until it was discovered by these non-government experts.
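To make one piece of this kind of open-source analysis concrete, the sketch below shows how an analyst might flag near-duplicate images circulating across different sites using perceptual hashing. This is only an illustrative sketch of my own: it assumes the third-party Pillow and imagehash packages, the file names and threshold are hypothetical, and the actual TEL comparison was done by expert eyes examining chassis features, not by a script.

    from PIL import Image
    import imagehash

    def similarity(path_a, path_b):
        """Return the Hamming distance between perceptual hashes (0 means near-identical images)."""
        return imagehash.phash(Image.open(path_a)) - imagehash.phash(Image.open(path_b))

    # Hypothetical file names; matches are flagged for human review, not treated as proof.
    parade_photos = ["parade_tel_2012.jpg"]
    reference_photos = ["chinese_tel_reference.jpg"]
    for p in parade_photos:
        for r in reference_photos:
            if similarity(p, r) < 12:  # threshold chosen arbitrarily for illustration
                print("Possible match worth expert review:", p, r)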

This scenario clearly demonstrates that outside experts have important contributions to make to arms control verification. It would therefore be interesting to discuss how the government can help outside experts, and what the possible downsides of using their work are. However, the article chooses not to focus on these issues and instead discusses seemingly less important topics.

An example of this is the subsection on “Data Management”. The subsection begins with the claim that “it will be essential to develop a framework” for data collection and dissemination in a “consistent, user friendly format”. It only becomes clear what this means when the subsection later suggests “WordPress” (a popular blogging platform) as a possible solution to this problem. Thus, it appears to be saying ‘blogs should be used to publicize research’. The rest of this subsection also illustrates another issue I had with the article as a whole: it uses buzzwords seemingly for the sake of using them. Specifically, the subsection adds that “Innovations in cloud computing” and advances in “big data” will help with challenges in societal verification, without discussing these challenges in any depth.
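For contrast, here is one concrete reading of what a “consistent, user friendly format” could mean in practice: a small structured record for an open-source finding that can be shared and machine-read. This is only a sketch of my own; the field names and sample values are illustrative assumptions, not anything specified by Hartigan and Hinderstein.

    import json
    from dataclasses import dataclass, asdict, field
    from datetime import date

    @dataclass
    class OpenSourceFinding:
        title: str
        summary: str
        treaty_or_sanction: str                      # e.g., "UN sanctions on DPRK"
        source_urls: list = field(default_factory=list)
        evidence_type: str = "imagery"               # imagery, document, testimony, ...
        confidence: str = "low"                      # analyst-assigned: low / medium / high
        date_reported: str = date.today().isoformat()

    finding = OpenSourceFinding(
        title="Possible TEL transfer",
        summary="Parade imagery resembles a foreign-built chassis.",
        treaty_or_sanction="UN sanctions on DPRK",
        source_urls=["https://example.org/parade-photo"],  # placeholder URL
    )
    print(json.dumps(asdict(finding), indent=2))     # shareable, machine-readable record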

I think it would have been more useful if the article had discussed the relationship between the government and outside experts in greater detail. In particular, there are a few related topics that seem worth exploring but were not discussed.

One of these is the motivation of the outside experts. Although some outside experts are currently motivated to do societal verification, more research might be done if the government provided incentives for it. These incentives could be monetary, for example a reward for researchers who discover a sanctions violation. However, other kinds of incentives might effectively motivate more researchers as well.

Another topic that wasn’t really discussed is the public nature of the discoveries and the challenges this poses. Because societal verification reveals its sources, the offending government can prevent similar disclosures in the future. For example, in the TEL case discussed above, North Korea now knows not to display sanctions-violating equipment in photos of military parades, since blog posts containing the pictures revealed a sanctions violation. If the violation had instead been discovered by an intelligence agency using the same sources, North Korea might never have learned how its violation was discovered. Although the article does discuss techniques like censorship as one way governments can frustrate societal verification, it doesn’t really discuss this cat-and-mouse aspect of societal verification. — Jonathan

22 thoughts on “The Opportunities and Limits of Societal Verification”

  1. I agree with Jonathan that the article could have used more discussion of the potential relationship between the government and these weapons experts, and the potential issue of making sensitive information public.
    Another point of critique I noticed in the article: though non-government weapons experts might be “canaries in the coal mine,” I think governments would do well to focus primarily on their own analysis. As the article notes, the internet has provided large amounts of data that were hitherto unavailable. However, I find it hard to believe that governments do not still have a monopoly on the most up-to-date, sensitive information. I would hope that governments, particularly the US government with its vast military budget, have more resources at their disposal to perform the kind of in-depth data analysis we need to ensure that international non-proliferation treaties are being adhered to. That being said, the article “Emerging Satellites for Non-Proliferation and Disarmament Verification” indirectly addressed this critique by supporting the idea of having outside weapons experts assist in the quest for non-proliferation. That article discusses how the commercial satellite earth observation industry has sprung up in recent years. Many small satellites that offer high-resolution imagery, and even HD video, now orbit the earth. These small satellites could potentially provide the imagery necessary for more detailed analysis by outside weapons experts as well as by governments themselves. Small satellite systems tend to offer more “frequent and affordable imagery” than traditional satellites. The article focuses primarily on how this imagery could be used by NPG member states, but I see no reason why this data could not also be used by outside weapons experts.
    In short, I think that governments should not rely too heavily on outside experts, but that relying in part upon these experts is viable, especially given new satellite technology.

  2. In the section titled “The Role of the Government,” Hartigan and Hinderstein discuss the ways that governments can actually interfere with accurate societal verification. One way is by restricting access to certain online sites amid periods of social turmoil, as Mubarak did during the Arab Spring protests. When these sites are shut down, “it can affect the ability of the user to collect information when it might matter most” (8). Another way that governments interfere with accurate societal verification is through censoring internet content, as the Chinese government does in order to silence opposing viewpoints. This type of government censorship “impact[s] the quality and reliability of the information available, and if only partial information is collected, incomplete and erroneous conclusions might be drawn” (8).

    Both restriction and censorship are characteristic of authoritarian regimes, which strive for a stable grasp on power through repression. However, these are the same states that could greatly benefit from the data provided by the Internet and its implications for intelligence gathering. In other words, these regimes’ own insistence on repression inhibits their ability to collect accurate information from their populations that could significantly enhance their international and domestic security. While this may be a long shot, I cannot help but envision how the desire to employ societal verification techniques could actually have a democratizing effect on some of the world’s most repressive regimes by encouraging them to keep websites open and to stop content censorship. One residual concern, however, is how these authoritarian governments would employ this information once it is permitted to be uploaded to the Internet, and whether they would respect the privacy and opinions of their citizens or use the information to target dissenters and further repress their populations.

  3. Jonathan discusses two different topics in this blog post: the effects of societal verification on nuclear disarmament and the ethics of utilizing experts as consultants for nuclear verification initiatives. There are positive and negative aspects to both topics. With regard to relying on non-government parties to assist in nuclear verification, Jonathan mentioned the potential effect the internet can have on verification, such as when North Korea displayed military equipment that violated sanctions in photos of a parade. The internet makes a much larger amount of information readily available, and the utilization of open-source techniques can also open the nuclear arms control discussion to a much larger group of people, each with different skill sets. However, though this may potentially yield some useful bits of information, the number of responses it enables could also make it ineffective. Likewise, the photos shared on the internet of the North Korean parade, though useful in detecting a single instance of sanctions violations, ensure that North Korea never makes that mistake again, which could prove detrimental to efforts to detect future sanctions violations.

    Jonathan also discusses using experts as consultants for nuclear verification techniques. There are many ethical issues intertwined with this strategy. For instance, who gets to be considered an expert, and how can others get to that point? Usually experts are identified by other experts, which could lead to a certain hierarchy that is hard to break into. Additionally, what motivates experts to make the decisions they do? Their standing as experts gives them ethos, but they could misuse such power for reasons such as monetary gain or popularity. If this is the case, what can be done to make sure experts make their decisions for the good of mankind? Despite these many ethical uncertainties, I can see that it is still extremely necessary to utilize experts as consultants due to their advanced education on the subject. However, I think it is equally important that government officials learn to analyze the information provided by experts to make the right decisions for their nations, rather than attempt to coerce or tempt experts with some sort of bribe to twist their opinions to align with those of a certain political party.

  4. I agree with most of the points Jonathan makes, specifically that the article, while not exactly vague, is rather uninformative about how to apply its nonproliferation methods. But I think that is the point and should not be seen as a lack of knowledge on the authors’ part: the article acts as a sort of database in itself, informing the reader of various nonproliferation methods and the brief ethical qualms related to the Biological Weapons Convention, land mines, the Moscow Treaty, etc. Having an array of strategies along with a briefing on the general climate of international affairs (which is an inherently broad proposal) seems the optimal way to structure the article.

    I also have a slight disagreement, though, with saying that “North Korea now knows not to display sanctions-violating equipment in photos of military parades, since blog posts containing the pictures revealed a sanctions violation. If the violation had instead been discovered by an intelligence agency using the same sources, North Korea might never have learned how its violation was discovered”. Regardless of the movement towards clandestine intelligence strategies, a point of the article is to show that the inevitable pace of innovation is outstripping current legal frameworks: transparency is inevitable, so the approach is to use what is already available to us (WordPress, YouTube, etc.). This may make the issue known to North Korea and threatening states at large, but it works within a known framework, databases that North Korea can understand and restructure with self-awareness.

  5. This week’s piece on The Opportunities and Limits of Societal Verification, by Hartigan and Hinderstein, helped focus the discussion of global security on the responsibilities of the citizens. With the advent of social media and the ever-increasing role that the Internet plays in our daily lives, there now exists a combination of unprecedented access to information and the ability to communicate it to a mass public.
    As Paige notes in her response, this leads to two distinct pathways to public involvement in nuclear non-proliferation: the involvement of “outside experts,” and the use of crowdsourcing techniques to gather information.
    Crowdsourcing in particular has become a more popular authority on issues of public security. In the case of the Boston Bombings, media attention immediately focused on the ability of online communities to coalesce and actively gather information on the suspects to aid local authorities. Crowdsourcing, in the context of nuclear proliferation, falls under three major categories: listening, probing, and mobilization. As the individual’s role moves from passive to active, the types of questions that individuals seek to answer evolve as well, until the community is consciously trying to engage in ways to solve a problem. In order to contribute to the nonproliferation movement, Hartigan and Hinderstein argue that users should be doing everything in their capacity to funnel information from around the Internet for governments and private enterprises to use.
    It seems that Hartigan and Hinderstein have an idealistic view of an alert and motivated society in which all the pieces work seamlessly together. Though this may be unrealistic, there does seem to be an essential role for the emphasis on observation and an active public in the struggle for nuclear nonproliferation.

  6. I think Paige brought up several strong points on the potential issues of using international experts. The danger of using international experts would be over-reliance on their abilities and their decision-making, especially if the “motivation” of these experts is a monetary one. The unintended consequence would be experts being “bribed” by governments to “discover” a sanctions violation in a country when no violation actually occurred. I think experts should work directly within NGOs and other international bodies and have only a minor connection to the government; their findings should be seen as suggestions to the government, which can back them up with its own intelligence systems. This way, the integrity of the experts and of the “societal verification” can be held intact.

    The interesting aspect of Hartigan and Hinderstein’s article is that its broad discussion allows the reader to be the innovator in solving the problems with verification. While some might say it is unrealistic to think that the whole community would work towards a greater goal in monitoring nuclear proliferation, it seems correct to say that the average human being strives for transparency from their government. We see this sort of social effect every time more information gets leaked about an intrusive NSA policy or a WikiLeaks report on a speculative action by a U.S. intelligence agency. Citizens tend to enjoy access to information that may affect them because transparency is inherently a democratic idea. Because of this, I think gathering the whole global community together (especially through technological methods) is not a very far-off idea.

  7. I agree with Jonathan that it is hard to imagine exactly how huge swaths of open-source data can be used efficiently and effectively to support non-proliferation treaty verification efforts. What analysis capabilities are required in order to gather, manage, and analyze this data, and who would be responsible (e.g., a state actor, an international agency, an academic) for monitoring this analysis?

    That many states already use public, open-source data in their “national all source intelligence gathering” efforts supports Hartigan & Hinderstein’s claim that technological barriers won’t bar states from crowdsourcing open-source data for non-proliferation related purposes. However, if open-source data is being used for general intelligence purposes, why isn’t it already being used for treaty verification purposes specifically? Hartigan & Hinderstein argue that political and legal barriers are to blame for the difficulties associated with crowdsourcing public data for non-proliferation purposes. Specifically, they note how the public’s lack of awareness about non-proliferation treaties and verification processes negates the possibility of enlisting the public in identifying treaty violations. While public ignorance is a sizable barrier to the “mobilization” method of data gathering, “listening” and “probing” of open-source data can be carried out regardless of the public’s awareness of treaty specifics. I would be interested to consider more substantive political and legal barriers to the “listening” and “probing” methods in order to better understand why crowdsourcing public data is not a mainstay technique for treaty verification in states that already use public data for general intelligence gathering.

  8. The central point that should be taken from Hartigan and Hinderstein’s article is the need to drastically increase the U.S.’s ability to perform open-source intelligence analysis as a means of monitoring and verifying arms control agreements. The article is correct in noting the great benefits and extensive information that can be gained from open-source analysis of social media. Several DC-based think tanks, including the American Enterprise Institute (AEI), have a particular focus on open-source analysis. As the article notes, however, most of these efforts are focused on “terrorism and areas of unrest.” AEI created the Critical Threats Project to perform open-source analysis on global al Qaeda affiliates by monitoring local news sources and social media accounts. An effort should be made to create a similar project focused on monitoring and verifying arms control in countries like Iran and North Korea. The article continually stresses the need for outside analysts to be organized into a cohesive network. As the article mentions, it is very important to ensure that these new networks remain independent from the government. This third-party character will allow these groups to contribute unbiased, varying perspectives, which will help avoid the pitfalls of groupthink.
    Finally, the article notes the balance that must be struck between security and privacy in the age of increasing surveillance and open-source analysis. First, I would note that the Edward Snowden analogy presented in the article applies to U.S. Government efforts to monitor U.S. persons. This is a very different issue from advocating for increased open-source analysis of foreign countries and non-U.S. persons. Second, the existence of third-party networks, such as think tanks, will help alleviate the public’s concerns over the government’s possible interference with privacy. These third parties will help make the process more open, balanced, and transparent.

  9. I would have to agree with the general sentiment that an increase in third-party, open-source verification of the NPT and other nuclear non-proliferation agreements is a very good direction to move in with regard to global security. As we begin to verify compliance with these treaties with more certainty, we will begin to move even more quickly towards complete disarmament. However, I do not believe that all verification should become open-sourced, as there are some parts of verification that should only take place between governments. While there is a lot of data available online for third parties to analyze, it is only through bilateral agreements like START that countries are able to go into other countries and conduct physical verification. I believe that it should stay this way, given that both countries have a vested interest in upholding the treaty. This vested interest comes from the fact that both countries receive extensive and sensitive information about the locations of the other’s nuclear bases and nuclear capabilities. This is the kind of information that might, indeed, be too sensitive to be made public. Another important reason why this information should probably not be made public is that it would actually deter other nuclear weapons states from joining agreements like this, since they would have no incentive to disclose their own nuclear capabilities when they already have extensive information about the other country’s. I know that I have somewhat argued against open-sourcing verification, but I must still affirm that I think it is a very good idea that has already proven itself to be effective. I just think that it needs to be approached with caution, as deterrence and non-proliferation interact with one another in very complex ways, oftentimes ways that independent third parties cannot predict or sense. While pursuing open-source verification, we should strive to continue to create agreements like START.

  10. Jonathan argues that this paper would have been more valuable had it gone into further depth about incorporating the contributions of outside experts into government analysis of the innocence or guilt of treaty parties, and in my opinion his argument is valid because I am not sold on the idea of crowdsourcing treaty verification information. I feel that crowdsourcing tools, as explained in the article, aren’t a viable supplement to existing treaty verification techniques for a couple of reasons. Governmental bandwidth and lack of necessity form one argument, and issues surrounding layperson involvement in intelligence operations substantiate another.

    The authors touch on the fact that arms control verification is not a priority for open-source intelligence analysts and argue that open-source intelligence gathering has not been incorporated into treaty compliance monitoring because the United States government may currently lack the resources to sort and analyze all of the information available (p. 6). The fact that open-source intelligence analysts do not prioritize arms control verification is not necessarily a pitfall of the intelligence gathering community. The Ifft reading for this week explained that, while monitoring and verification standards change, the standard for the US during the Cold War era was to be able to detect a change in the strategic balance before it posed a threat (Ifft, 4). The United States has had a successful history of sufficient monitoring and verification. The open-source intelligence analysts are concerning themselves with predicting and thwarting acts of terror, which both presents a more immediate threat and lends itself more readily to crowdsourcing, because many terrorists are influenced or recruited over social media and have active social media presences. Government intelligence analysts have missed incoming information before, and some terror attacks have succeeded because of it. Adding crowdsourced information on arms control verification may cause more clutter than it proves to be worth unless there is an extremely efficient way to categorize it and sift through the noise, and these analysis efforts should not remove manpower from open-source analysis regarding the war on terror.

    Secondly, the brief discussion on the gamification of arms control verification intelligence gathering, or “mobilizing” citizens through games (p. 4), was concerning. Where would the line between citizen and operative be drawn? What are the ramifications of becoming a low-level contributor to the intelligence community? What if a citizen uncovers information that later becomes classified, or that endangers them? Would they fully understand the implications of the games they are playing, or would they be bound by some clickbait “I agree” pop-up contract on a mobile app? The discussion on crowdsourcing left me with too many questions with possibly negative answers to feel that it would be worth adding to the arms control verification toolkit, but the paper served as an introduction, and maybe learning more about it would reveal more benefits.

  11. While reading this article I was surprised by the implication that the internet could be used by the common person to help gather and analyze data to check compliance with non-proliferation agreements. I suppose that prior to this article, I looked at the problem of the internet through the eyes of the user (being an average user and not the government), noting the privacy and legal constraints that the government has to abide by in order to protect its citizens’ rights. While Hartigan and Hinderstein briefly touch upon the privacy concerns raised by Edward Snowden, for instance, and the idea that the general public has become increasingly aware of the “footprint” they leave behind on the internet, I think the piece glosses over the severity of these implications and attempts to convince the reader that the opportunities far outweigh any concerns. I am not that convinced, and not because I believe the government should not have access to my information if it would help national security; in fact, I think there should be open access to what is public online. What concerns me is where the line will be drawn. This document points to various ways to data-mine: some methods are more pinpointed and specifically “probe” the user to gather data, while other methods are much more passive and can take the form of crowdsourcing. In fact, just using social media listening tools can compile a large amount of useful data. My concern is that if the government, and for that matter corporations, are able to gather this information without consent, how will privacy exist in any form in the future? Within just these past few decades, companies like Apple have helped take the internet from a small network to an international system that touches almost every person’s life. That amount of data is unbelievably useful, and I don’t disagree with Hartigan and Hinderstein on that, but I am afraid that in another 20 years the idea of privacy may not exist at all, even if we want it to.
    This ties into what was said in this blog post about outside experts being liaisons of sorts between the government and the public. I agree that this idea could have been more developed, and my question is, again, how much data will they really have access to? I would want to believe that it would be limited to what is public, but knowing that national security allows the government more access makes me believe that they would be more than willing to allow access to outside experts as well, who would be doing the government’s bidding. This then brings up the question of whether these experts would need special privileges, and whether that level of clearance is difficult enough to grant that we are no closer to using regular users as managers. I think Hartigan and Hinderstein are right that the most difficult task is truly the implementation and political framework that is needed to use internet users as safeguards; furthermore, I harp on the point that privacy is going to be a major source of contestation in implementing such safeguards.

  12. I found the Hartigan and Hinderstein piece to pair with the article on Full-Motion Virtual Reality (FMVR) in a rather interesting way. As Jonathan discusses above, Hartigan and Hinderstein focus on the way in which there now exists so much available data that could aid in ensuring, among other things, treaty verification. This information comes through a variety of channels, both “passive” and “active”, terms they use to construct a spectrum of ways in which to collect relevant data. The issue therefore becomes not one of the presence of such information but the application of it. “It is now clear,” they write, “that the primary challenges are not necessarily technical, but operational and political.”
    In this light, FMVR seems to add another promising element to an already saturated pool of potentially useful verification technology. While the FMVR analysis suggests that virtual reality can have a significant impact in tackling various challenges of verification, I worry that it and similar studies may go to waste. As Hartigan and Hinderstein note elsewhere in the study, “Growth in innovative software programs and big data expertise have far outpaced the requisite policy and legal frameworks, making the central challenge one of how to integrate societal verification information with existing data streams for arms control verification.” While FMVR does not fall under the header of data collection, I nonetheless see it as another contributing factor to the technological “outpacing”. Until these various elements are integrated into policy that is both straightforward and easily agreed upon, it may be that the scientific community sits one too many steps ahead of political reality.
    This is not to say that there are no immediate solutions within the FMVR piece that can be applied to the article on societal verification. One element that stuck out was Hartigan and Hinderstein’s observation of the correlation between task complexity and necessary expertise. “There appears to be a direct relationship,” they explain, “between the complexity of a verification task and the level of expertise that would be required to complete the task.” The ability to run a high number of simulations at relatively little cost would seem an easy means of changing this dynamic. FMVR could give verification exposure to a far wider group of enforcers. Though not a guarantee, this added practice could reduce the necessary expertise threshold, thereby creating a larger pool of sufficiently trained enforcers.

  13. While an increase in open-source verification of non-proliferation agreements would certainly be beneficial in holding states to their word, I think that Hartigan and Hinderstein exaggerate the availability of information that can be regarded as top secret and invaluable to a state’s national defense. Any state is generally disinclined to make its power known, creating uncertainty for other states, which therefore have to allow for error in their power calculations, allowing the dishonest state to project more power than it has. While the experts in the North Korea example did correct that source of error, it is only one instance, and state power takes many forms that may be less detectable than a parade.

    The experts can only analyze whatever information they are provided, which is why I think the article tried to address the problem of data management. There is simply so much information available online that it would be challenging for any expert to sift through all the information pertaining to North Korean missile silos to find a “diamond in the rough”. I agree that there should be some sort of way to organize that data by pertinence, as sketched below. Nonetheless, the section is vague and would require more elaboration to be seriously considered as an option. It is also an interesting idea to get the public involved in terms of mining more information for the government. Perhaps there could be some sort of incentive, like a tax reduction for citizens who install a certain app on their phone that feeds information about their surroundings to the intelligence community. Of course, that would get into some very serious privacy issues, although the article takes the stance that the “opportunities are too great to ignore.”
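    To make “organize that data by pertinence” concrete, here is a minimal sketch of my own (not anything proposed in the article) of how incoming open-source items could be scored against analyst-chosen keywords so that the most relevant items surface for expert review. The keywords, weights, and sample items are hypothetical; a real system would need far more sophisticated filtering.

        # Crude keyword-weighted relevance scoring for open-source items (illustrative only).
        KEYWORDS = {"tel": 3, "launcher": 3, "missile": 2, "parade": 1, "chassis": 1}

        def pertinence(text):
            """Sum the weights of keywords that appear as words in the text."""
            words = set(text.lower().split())
            return sum(weight for word, weight in KEYWORDS.items() if word in words)

        items = [
            "Photos from today's military parade show a new missile launcher chassis",
            "Weather report for Pyongyang",
        ]
        # Surface the highest-scoring items first for a human analyst to review.
        for item in sorted(items, key=pertinence, reverse=True):
            print(pertinence(item), item)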

  14. On page 8, the authors talk about some of the government actions that can affect societal verification: restricting access, censoring content, and protecting privacy. It seems to me that, in this era of fake news, a fourth category is missing: “deliberately releasing false information”. I wonder what effect this would have on societal verification. I do not believe that the authors have a full grip on the ways in which governments can restrict this method, and they fail to go into depth about them, as Jonathan pointed out. In North Korea, it seems that on top of censoring, the government would try to mislead experts about the nuclear underpinnings in their country, and this is something to consider moving forward.

    In addition, their claim that “public familiarity with treaties, and treaty limited items is limited” (6) seems to be an understatement, as I am sure that many of us had no idea what the stipulations of the NPT or the Test Ban Treaty were before today’s lecture. But I must ask the question: is it worth it to spend a lot of money on educating the public about “treaty relevant objects”? Although they state that there “appears to be a direct relationship between the complexity of a verification task and a level of expertise that would be required to complete the task”, it seems silly to me that we are discussing a widespread public education initiative about these treaties, as societal verification on a grand scale seems impossible. If this societal verification is limited to experts and their networks, it mostly seems like the evolution of think tanks from in-person to online, so is this really the mass datafication that they discuss? Or is it simply the way forward in academic circles?

  15. This article and this blog post both seem to focus on the positive aspects of societal verification. While I agree that there are many encouraging elements to the use of outside/public experts in arms control verification, I couldn’t help but fixate on the negative implications of such a system while reading this post. In a worst-case scenario, I believe that these negative implications outweigh the positives, and thus make societal verification more of a global security risk than a reward.

    Societal verification has been useful thus far because it is unexpected. However, as it becomes more widespread, I believe that many countries (including the United States) may try to take advantage of this fact. I envision a scenario where governments begin to manipulate the data that is publicized (I don’t fully understand the rules and regulations regarding the data that is made public, but I wouldn’t put the publication of false data past a nation like North Korea, for example). In doing so, such nations can lead the experts astray, resulting in the dissemination of inaccurate “societal verification”. If executed well, an adversary could convince experts that certain nations have nuclear weapons when they in fact do not, and vice versa. Ultimately, as societal verification becomes more of a standard, I believe that governments may use this to their advantage and publicize false information in order to protect their secrets and/or create conflict.

    Furthermore, it is concerning that there does not seem to be an official system in place to verify the experts who are doing the verification. If the above scenario plays out and governments allow for the publication of false data, the experts will make false discoveries. Moreover, if there is no system in place to validate the truth of their findings, it could potentially lead to widespread panic and conflict. Imagine an expert who believes they have proven that Iran has a nuclear arsenal. If there is no official source that can somehow verify these findings, various media sources may in turn publish it as news. Eventually, the entire country will believe that Iran has nuclear weapons, even if the original discovery was based entirely on false evidence.

    Ultimately, I see many potential pitfalls in a societal verification system. While there are benefits to having public experts sifting through information and conducting arms control verification, I fear that the possible negative consequences may outweigh these positive implications.

  16. I like how Jonathan raises many valid concerns with the paper by Hartigan and Hinderstein, The Opportunities and Limits of Societal Verification. I share similar questions about why the paper failed to address the questions it raised or go into enough depth to offer truly insightful messages, as it felt more like a broad overview of the subject than a research article offering a solution.

    While I agree with the paper that the vast amount of data that needs to be evaluated for a proper investigation of nuclear proliferation may lend itself to having experts in society help out, I (like Jonathan) finished the paper unfulfilled by the lack of tangible suggestions on the writers’ part. Experts distributed throughout society offer an incredibly powerful tool to combat the fear of ‘big data’: the fear that perhaps we have too much data and it is overwhelming our ability to accurately evaluate it all.

    By using experts in society, governments can find ways to harness the full strength of their assets, yet this use has its drawbacks as well. As the paper discusses, a government may be wary of turning to this form of nuclear verification because its public focus doesn’t lend itself to espionage or secrecy. Disseminating information on nuclear verification is another aspect the government is likely nervous about. Nuclear states are likely protective of all the recent developments in nuclear proliferation and therefore won’t be inclined to inform a random expert in society about what these new plants/devices might look like. This raises the question of what knowledge an expert needs in order to identify these signs, as in the case of the Iranian government shutting down certain social media websites during the Green Revolution.

    This paper didn’t offer a perfect system for harnessing these lay experts; however, I would be interested to see a system implemented that allows citizens with a passion for and focus on these sectors to make a difference. Being overloaded by big data is a real problem that people are attempting to solve through machine learning or more manpower, and in this case the answer may be found in the hands and eyes of society’s experts.

  17. The authors explained the potential of the public to be involved in the detection of arms control treaty violations. With rapidly developing technology, the public can play a role in the detection of potential violations with the use of common devices like their smartphones. The article even says that people are working on using social media platforms, like Twitter and Facebook, as tools for verification. I agree that a problem not adequately addressed is “fake news,” particularly fake news from the general public. What are the effects of users reporting false information? Is there an effective way to determine the validity and truth of a report? As we can see in any online article or video, there are always trolls and scammers, so there must be an effective way to sift through the real and fake comments. The authors later state that “societal verification contributions will always be fundamentally unreliable as a stand-alone source given the prospects for government intervention and manipulation” but fail to address that it may be fundamentally unreliable simply because of public error.

  18. I agree with Craig’s observation that “crowdsourcing in particular has become a more popular authority on issues of public security.” Nevertheless, I am hesitant to agree that this is necessarily an all-encompassing good, as he does with his example of the Boston Marathon Bombing. In that case, social media and crowdsourcing initially coalesced around an individual who was in no way related to the bombing, and who had only been focused on due to his demographics. While that is a rather extreme case, and while I do believe that transparency (to a point) can be a useful aid in widening the perspective of government officials as well as a good secondary safety net for potential threats, I believe that similar inaccuracies could occur in the realm of nuclear verification. If all of this information is public, it means that any number of reports (true or false), warnings, or alerts could be posted to the internet without any kind of check, and the realm of nuclear verification could be condemned to the level of conspiracy theorists and fake news.

    As Naomi points out, there must be an effective way to sift through the real and fake reports. The best way to do this (given the inevitability of this access, as stated by the authors and in earlier posts) would be by using the experts that the authors fail to adequately define in their article. And there are times when this approach could yield substantive results, such as in the North Korea example. Even so, the crowdsourcing of nuclear verification will inevitably lead to the proliferation of false reports, some of which will be hard to distinguish from the truth. It is not for nothing that many of these verification checks are done by experts and intelligence agencies with the requisite personnel and resources. Given this, it might be more feasible to implement a type of crowd-sharing forum that is limited to experts in the field (much as the astrophysics and astronomy communities generally operate globally), instead of posting universally on platforms such as YouTube and WordPress, in order to minimize the number of false reports that are generated.

  19. As Jonathan and most of the people on this thread have addressed, the paper presents a very optimistic outlook on the future of collaboration between the citizenry and governmental organizations towards the goal of verification. The article spends most of its volume detailing specific ways that private experts can provide assistance in various forms to government bodies that help with verification efforts, and providing positive examples of ways that private citizens have assisted investigation efforts in the past, such as in the case of the Iranian blogs. The article puts faith in YouTube, saying that it “carries some unique and honest-to-goodness intelligence. There are methodologies involved… We have groups looking at what they call “Citizens Media,” people taking pictures with their cell phones and posting them on the Internet.”

    However, as many on this thread have mentioned, the article does not provide enough substantive rationale to justify its rosy tone. First, the article’s optimism about what it expects the private population to accomplish is almost absurd. It takes the popular slogan “if you see something, say something” and seemingly assumes not only that every citizen will follow this mantra diligently, but also that they will provide espionage-quality reports and intelligence on what they see. If this prospect were not unlikely enough to dampen the overall tenor of the article, the authors also seem to generally disregard the role of national intelligence systems for the foreseeable future. The article does speak about the need specifically for the government to be able to monitor more information simultaneously, but it then returns to the point that the government should accept the assistance of private groups, such as the AEI, brought up earlier in this thread by Coy, in such data gathering and sorting.

    Furthermore, there is a reason that people dislike the idea of a government with far-reaching intelligence capabilities; it is evocative of a body similar to that of Big Brother in the Orwellian classic 1984, a body with the ability to monitor all thought and punish any hints of dissent. But this problem is in no way solved by handing over a portion of that intelligence capability, and therefore a portion of that incredible power over the lives of individuals that one may or may not know, to private citizens or groups, who, unlike the government, are elected by no one and accountable to no one other than themselves. Although I see some reasons to be optimistic about the potential of government-private sector collaboration in verification and intelligence gathering, I hesitate at condoning it.

  20. Hartigan and Hinderstein’s article, “The Opportunities and Limits of Societal Verification,” delves into the topic of societal verification as a means of keeping governments accountable in the realm of arms control verification. Throughout their piece, Hartigan and Hinderstein emphasize the potential for ordinary citizens to provide crucial information “to supplement traditional arms control verification techniques.” This idea stems from the belief that members of society act as the government’s watchdog: citizens, especially now that they are armed with technology-sharing capabilities, can keep the government accountable to its agreements on nuclear arms. In his response, Kevin says, “While some might say it is unrealistic to think that the whole community would work towards a greater goal in monitoring nuclear proliferation, it seems correct to say that the average human being strives for transparency from their government.” I agree that the average citizen would assist in the gathering and sharing of important data, should those actions lead to a more transparent government, as they naturally would. The government may be able to hide its nuclear actions from other countries, but it is very difficult to hide them from all of its citizens as well. With these citizens now threatening to report out-of-line government behavior, the government is forced to think twice about its actions before violating an existing agreement. However, it is important to note that, as things stand, simply letting the public loose and asking them to share relevant data and evidence of a country’s nuclear activity is not the solution. Citizens need to be educated on what to look for, what these signs mean, and how best to report them. Creating a streamlined process for educating the public about societal verification of nuclear arms control is a crucial first step if societal verification is ever to be sufficiently employed.

  21. I think that this idea of societal verification is fascinating and is one that I was honestly not really aware of until reading this piece. I agree with most of Jonathan’s points that the article did a decent job of highlighting the different possibilities for societal verification, though there are also some drawbacks. I also agree with Lucas that I am not so sure that the last point, the public nature of the discoveries, is such a negative compared with other methods.

    I do think that Amanda and Yasmeen bring up great points regarding false reporting, manipulation of data, and checks on credibility. I suppose that all reports are to be thoroughly checked and vetted, but many of these high-risk situations require prompt and steady action. I find it quite plausible that societal verification could lead to a hasty or misguided decision, which is a little terrifying. Yet I am not sure that this is cause to dismiss societal verification completely.

    The bottom line is that societal verification should probably be taken with a grain of salt, meant more to highlight a certain area or direct an investigation. Whistleblowers are also not always credible, as they may have ulterior motives. Thus, with the Internet, I feel it would be foolish to completely discount societal verification, but it should be treated more as guesswork or guidance than as gospel.

  22. While I do think that using experts to crowdsource surveillance is a great method in theory, this methodology faces the principal-agent problem, in which one party (the agent) working for another party (the principal) may have motivations that differ from the principal’s. Some people have mentioned specific manifestations of this problem, such as monetary incentives distorting the information crowdsourcing would provide, as well as the possibility of dispersing false information. Thus, there are two main questions people have been addressing: 1) Is it possible to ameliorate or eliminate the principal-agent problem? 2) If so, how?

    I believe the principal-agent problem could be ameliorated by a system of retroactively provided incentives, such as monetary benefits given after the crowdsourced finding has been verified to be true. Conversely, disincentives could be doled out retroactively as well. If a claim turns out to be false, the expert who made the false claim could be fined, and punished more severely for repeat offenses.

    Also mentioned in a previous post was the idea that crowdsourcing is a new method that is only effective due to its novelty, meaning that as states catch on to the idea of crowdsourcing, they could leak false information to mislead people, taking advantage of its weaknesses. However, the dispersal of misleading pictures and propaganda is already commonplace, so I don’t believe that crowdsourcing will be as easily countered by misinformation as one may think. Still, I do agree that crowdsourcing is a tool that needs to be used with precaution, perhaps as a type of screening device. Overall, this methodology has solid merits, and it would be unwise to turn a blind eye to them.
