Crowdsourcing Nuclear Arms Verification

“Crowdsourcing” and “Nuclear Arms Verification” are two terms rarely seen together in the same sentence. One invokes the image of ambitious youth in Silicon Valley, while the other seems applicable only to the extremely limited realms of D.C., Moscow, and Vienna. Yet, as the readings for this week suggest, the idea of incorporating crowdsourcing into future nuclear arms control treaties may not be too far-fetched.

The JASON report explores exactly this idea of taking advantage of crowd-sourced data management to effectively verify compliance with and detect violations of arms control regimes. The report notes two possible avenues for such a scheme – crowd-sourced data gathering and crowd-sourced data processing. The former refers to schemes in which individuals are encouraged or incentivized to gather and share relevant information, such as measurements and images. In the latter, individuals make use of public data (e.g., satellite imagery) to identify trends or patterns that could aid in detecting treaty violations. In both cases, recent technological advancements have magnified their potential and effectiveness. The JASON report notes that the pervasiveness of smartphones among ordinary citizens has created a reservoir of photographs at an unprecedented scale. Similarly, projects and firms such as PLANET illustrate how technological advancements, in this case small-scale satellites, have greatly enriched the information available in public sources, which can then be exploited via crowd-sourced schemes. The author notes that the key to constructing an effective crowd scheme is to provide participants with the means (e.g., public data) and a clearly defined outcome metric. Thus, this could be applied to nuclear arms control treaties, so long as there is a limited scope of irregularities to look for.

Needless to say, this idea of “crowd-sourcing” nuclear arms verification comes with significant risks and drawbacks. The JASON report makes the point that, unlike earlier crowd-sourced information-gathering schemes for terrorist activities, crowd-sourced nuclear arms verification may entail citizens effectively participating in the verification of their own country. These factors necessitate a strong ethical and legal framework that would protect the individuals participating in the crowd-sourcing, as well as prevent harmful information disclosure. Furthermore, touching upon the Woolf reading, past frameworks for arms control verification devoted significant attention to strictly limiting information sharing to what is necessary to detect violations of the treaty. Should this framework be opened up to the public for crowd-sourcing, a monumental challenge would be to confine the public’s activities to what is relevant to the treaty, and to prevent the public from uncovering classified information.

Given the promises and consequences of crowd-sourced arms verification, the main question I would like to pose in this blog post is the following: would the United States (or any other country) be compelled to adopt this framework for future treaties to prevent other nations from gaining a comparative advantage? So far, the START treaty and the New START treaty between the United States and Russia have relied on bilateral verification via national technical means (NTM). Following the proverb “if it’s not broken, don’t fix it,” it makes sense for future regimes to continue this bilateral verification scheme unless circumstances dictate otherwise. On the other hand, in both offensive and defensive capabilities, technological advancements continually compel nations to update their arsenals to prevent their adversaries from gaining an edge in the competition. This leads to the question – would there be a scenario in which the United States would be placed at a disadvantage because it did not adopt this new initiative of crowd-sourced verification/surveillance? — Kouta

7 thoughts on “Crowdsourcing Nuclear Arms Verification”

  1. Kouta raises quite a few important questions, the main one being “Should the U.S. adopt a system of crowd-sourced verification/surveillance for nuclear arms verification?” Personally, I think the JASON report proposes an interesting idea, but one that is overall quite complex.
    To be fair, I remember reading this interesting article in Forbes just a couple of years ago (link: https://www.forbes.com/sites/prossermarc/2016/03/10/how-a-crowd-science-geiger-counter-cast-light-on-the-fukushima-radioactive-fallout-mystery/#15e96d3a201b), in which a crowd-sourced Geiger counter network was used by over 200 volunteers to help understand the radioactive fallout after the Fukushima nuclear disaster. A quote I found interesting was the following: “The at time insular nature of Japanese organisations, the desire to not report bad news to superiors or to lose face have all been cited as reasons why the initial response to the accident was marred by lack of information and refusal of acceptance of the results gathered by outside organisations….Ironically, data from organisations like Safecast (that initiated the “crowd-science”) increased confidence in the Japanese government.”
    While it is interesting how well the above seems to have worked for Japan, there are obvious differences to account for and considerations to be made when it comes to using crowdsourcing for nuclear verification and surveillance in the United States. First, although there is evidently dissatisfaction with the government in the U.S., I am unsure that there is enough distrust in nuclear data and measurements to support the argument that involving citizens in the process could ironically increase trust in the government. Second, and more importantly, is the point raised by Kouta – nuclear data is, for the most part, extremely sensitive. Citizen participation is one thing, but providing information about nuclear developments and capabilities to the public is another. Of course, it is probably possible to surmount these challenges with the very technology that creates them, but the time and expense that would take may outweigh the potential benefits. Perhaps the most persuasive point comes from the last question in the post: would we be at a disadvantage if we did not do it? I think not. It seems that, in general, most of the “nuclear powerhouse” countries, perhaps with the exception of France and the United Kingdom (which are both strong allies of the U.S.), are unlikely to consider sharing nuclear information with their citizens anytime soon; Russia, China, North Korea, and India have rather oppressive regimes, and it is difficult to envision any one of those countries allowing its citizenry to participate in governmental nuclear programs; to me, the idea is much more far-fetched there than it is in the United States.
    Still, despite these drawbacks, the JASON article makes the point that it could, if well-controlled, be beneficial to crowdsource nuclear verification and surveillance. I wonder if this could be done in only very specific cases where sensitivity is less of an issue; for instance, to help measure radiation fallout as in Fukushima should a similar disaster occur on U.S. soil, or if there is an urgent need for quick data collection that the government is unable to meet with limited resources. On a broad scale, however, it seems that this is a complex and somewhat dangerous program to implement; although I am all for more inclusion of the public in governmental functions and decisions, the potential devastation of the nuclear issue makes me a bit wary of releasing this kind of information to the public, especially given the interconnectedness of the Internet that we discussed with our guest Ed Skoudis.

  2. There are still a couple of things about the use of crowd-sourced data gathering in nuclear verification that I’m unclear about. First, what kind of “incriminating” evidence could a member of the public provide on the country’s nuclear activities? The country’s nuclear activities, especially if they are illicit and intended to bypass the terms of an agreed treaty, are likely to be top secret and known to very few people. How could an ordinary member of the public be made privy to those kinds of activities and that knowledge? It seems that only certain government officials or workers would be able to provide that information. Second, how would a country that seeks to verify the nuclear activities of a rival country recruit people for data gathering in a manner that escapes detection by the rival government? Recruiting people for data gathering is a process that presumably requires some kind of public posting.

    In response to Kouta’s question as to whether a country should be compelled to undertake crowd-sourced data gathering to prevent other countries from attaining a comparative advantage, I believe that a country should not engage in crowd-sourcing for nuclear verification purposes in a unilateral manner. I feel that treaty verification is based in large part on trust and good relations between countries. If one country were to seek to illicitly crowd-source data on another country, this could lead to a breakdown of relations and of the treaty, especially if the country that was being crowd-sourced found out.

    However, I do think that verification through crowd-sourcing could be implemented as a term of the treaty, so that each country party to the treaty reserves the right to use crowd-sourcing for further verification purposes. This could strengthen the deterrent force of the treaty. In essence, I feel that crowd-sourcing should not be used as an illicit way to check whether a country is abiding by the treaty, but it could be used to strengthen the verification measures implemented in the treaty.

  3. The benefits of crowdsourcing are an open secret in many fields, especially in the biological sciences, where allowing the general public to try their hand at folding proteins has yielded significant results. Verification, on the other hand, is a more complex issue due to the numerous security concerns associated with putting sensitive data in the hands of citizens. However, it may still be possible to adapt the crowdsourcing model used in scientific applications into a viable form of verification.

    The JASON report mentions two ways in which ordinary citizens could aid nuclear verification efforts: data collection and data analysis. The former poses an inherent security risk since, as mentioned before, this would usually involve citizens collecting data on their own government. This does not necessarily have to involve giving the general populace the right to record classified nuclear secrets; for instance, something as simple as crowdsourced monitoring of activity at a particular enrichment plant or missile production facility would not directly cause a leak of state secrets, and anything out of the ordinary would be easy to spot (an unusually large number of trucks entering an enrichment plant, for example, could be something that a watchful citizen could consider reporting). However, this would be impractical for a number of reasons. Citizens may consider this equivalent to spying on their own country and unpatriotic, and those who do agree could face stigmatization by their peers. Furthermore, authoritarian or oppressive governments could actively discourage crowdsourcing.

    Most of the factors that render data collection by the general populace impractical stem from the fact that it involves citizens collecting data on their own country (having them collect data on another country would be even more impractical, and an unnecessary duplication of resources, as trained inspectors exist for that very purpose). Crowdsourced data analysis, on the other hand, is much more practical. The data collection can be done by trusted authorities with the required security clearances, or by internationally accredited inspectors, and the analysis can be left to the public. One of the key factors in implementing this is abstraction. For example, the University of Washington presented a protein-folding problem in the format of a video game, and a crowdsourced solution was found in three weeks. Abstraction serves two purposes: first, it allows the providing entity to present the data in such a way that sensitive material can be obscured; second, it allows the public to participate even without sufficient technical expertise. For example, it is feasible that the analysis of satellite imagery could be transformed into an online game that anybody could take part in, aiding verification.
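    To make the game-style analysis idea concrete, here is a minimal sketch of how such a platform might aggregate players’ labels on abstracted image tiles. All names, tile IDs, and thresholds below are hypothetical illustrations, not details from the JASON report:

    ```python
    from collections import Counter

    def aggregate_labels(reports, min_votes=3):
        """Combine crowd labels per tile by strict majority vote.

        reports: list of (tile_id, label) pairs from anonymous players.
        Returns {tile_id: label} for tiles with at least `min_votes`
        reports and a label chosen by more than half the voters.
        """
        by_tile = {}
        for tile_id, label in reports:
            by_tile.setdefault(tile_id, []).append(label)

        consensus = {}
        for tile_id, labels in by_tile.items():
            if len(labels) < min_votes:
                continue  # too few eyes on this tile to trust a result
            winner, count = Counter(labels).most_common(1)[0]
            if count * 2 > len(labels):  # strict majority
                consensus[tile_id] = winner
        return consensus

    # Hypothetical reports from players labelling abstracted satellite tiles
    reports = [
        ("tile-07", "normal"), ("tile-07", "unusual"), ("tile-07", "unusual"),
        ("tile-12", "normal"), ("tile-12", "normal"), ("tile-12", "normal"),
        ("tile-99", "unusual"),  # only one report: below the vote threshold
    ]
    print(aggregate_labels(reports))
    # {'tile-07': 'unusual', 'tile-12': 'normal'}
    ```

    The point of the abstraction is visible in the data shape: players only ever see tile IDs and make coarse judgments, so the providing entity controls exactly what information leaves the classified domain, and redundancy across many participants filters out individual mistakes.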

    However, it is important to note that the nature of crowdsourcing makes it at best a supplement to traditional verification measures. Nevertheless, verification protocols should not discount crowdsourced data analysis as one possible measure that could be implemented.

  4. Kouta raises the question of whether the United States will be placed at a disadvantage if it does not adopt crowd-sourced verification/surveillance. In saying this, Kouta implies that governments have the decision-making power in determining which direction we move in, and that if the U.S. government were to decide against crowdsourcing, we may be at some sort of disadvantage.

    I think that Kouta is partially right — if the U.S. government took the initiative to organize and streamline the crowdsourcing process, its benefits would be maximized. However, I also think that even without any government making an active decision, the world will inevitably move towards the crowdsourcing model.

    I recently listened to a four-episode podcast series on Planet Money (https://www.npr.org/podcasts/510289/planet-money), devoted to how the Planet Money team, in collaboration with PLANET, sent a small satellite into outer space. When a small non-profit organization willing to spend less than ten thousand dollars on sending a satellite finds a way to do exactly this and is able to take high-resolution pictures of North Korea, crowdsourcing is inevitable.

    Sending small satellites into outer space has never been easier. The Planet Money podcast talked about a rocket start-up that eventually wants to send a rocket into space once every week, carrying hundreds of small satellites. It is no longer infeasible for wealthy individuals and mid-sized companies to send their own satellites, and all of them have the ability to take pictures of North Korea, Iran, etc. They can decide to pool this information for public use without any help from the government.

    Crowdsourcing seems inevitable to me, not only in the U.S. but everywhere else in the world. I think that crowdsourcing will happen and drastically change the norms of verification/ surveillance. For one, it would be harder for countries to hide how much intel they possess on other countries both from the other countries in question and the general public. Information will become more widely available to everyone, and secrecy will play less of an important role.

    Given that crowdsourcing seems more or less inevitable, I think it makes sense for us to start thinking about how best to turn it to our advantage.

  5. Although few people would contend that crowd-sourced data analysis and collection are not generally beneficial/useful, there remain a number of limitations making their application to nuclear/arms verification impractical to implement.

    The JASON report mentions two general categories of crowd-sourced work: crowd-sourced collection and crowd-sourced analysis. It seems to me that the report could have done more to make this divide starker; in fact, these two types of efforts face significantly different challenges to their execution.

    Data analysis, unlike data collection, presents far less resistance to public participation, given its relative geographic independence — in other words, a collaborator need not live in China (e.g.) to contribute to crowd-sourced data analytics evaluating metadata or overhead imagery pertaining to Chinese nuclear facilities. As a result, the cost of getting involved is relatively low by comparison. When it comes to data collection, however, these risks increase precipitously. To collect data on the ground, for example, geographic location becomes far more pertinent. As a result, potential participants in crowd-sourced collection efforts face the prospect of negative political/legal/physical consequences carried out by local or national governing bodies or groups, especially if said groups/governments are not in favor of increased transparency surrounding their sensitive programs. In fact, the sad reality is that the countries for which data would be most useful are also those most likely to resist crowd-sourced collection efforts. Particularly under more oppressive regimes, the extent to which fear of political, social, or physical reprisals for collaboration would inhibit public participation should not be underestimated. Additionally, we face the issue of injecting a highly transparent process into an incredibly sensitive field. Even if we ignore the fact that much of the most useful data is highly classified and thus out of reach of open-source (public) collection means — which we shouldn’t — we are bound to face at least *some* level of resistance from governments to public attempts to collect data outside the realm of controlled information. Bringing cameras or sensors (as the JASON article mentions) into or even near sensitive facilities/areas will inevitably raise concerns on the part of governing bodies, to the detriment of both the participants’ position and the extent to which governments trust the international system.
    The U.S., like any other government, controls sensor/camera use near restricted facilities, illustrating how such collection attempts would likely face non-trivial resistance, if not legal response. On that note, any crowd-sourced collection framework would need to address the fine line, mentioned in the report, between crowd-sourced collection and espionage. Even as I read the report, recommendations of using citizens to collect data on, and sometimes against the wishes of, their government and its activities, and to “train and ask staff in foreign missions” to actively seek out further collection sources struck me as something that could easily be construed as nefarious by reluctant governments. Even if entirely pure in motive, the U.S. encouraging, training, and equipping citizens of a country to collect information on sensitive activities, with the support of U.S. overseas officers/employees, comes across (or could easily come across, to a suspicious government) as eerily similar to traditional espionage efforts involving the recruitment, training, and equipping of local informants.

  6. In response to Kouta’s question, I do not believe there is an effective way to build a trustworthy, widely available verification/surveillance system for nuclear arms as proposed in the JASON report. However, the report also mentions crowd-sourced data analysis, which I do believe is vital to the United States’ strategy of verification and arms control.

    The reason I feel there is no strong case for decentralized crowd-sourced verification of nuclear weapons is that it requires a citizen to monitor their own country, or a non-citizen to risk punishment by spying on foreign soil without the approval of their homeland. Another reason I find the crowd-sourcing argument weak is that restricted areas extend well beyond the visible range of weapons and their storage facilities. This could be countered by watching shipments of weapons, but that requires 24/7 undetected surveillance of (most likely) a military base, covering any possible vehicle in which a nuclear weapon could be smuggled out. Also, though the cost of launching satellites has dropped significantly, the Economist reports, “Although there is no standard price list for a launch, a CubeSat costs roughly $100,000 to put each 1.3kg unit into low-Earth orbit.” https://www.economist.com/news/technology-quarterly/21603240-small-satellites-taking-advantage-smartphones-and-other-consumer-technologies
    This article is talking about the cheapest orbit as well, which is low-Earth orbit (LEO). LEO also means that one’s satellite does not remain above a fixed target location (the orbit that stays over a fixed point on the equator is geostationary orbit [GEO]). Thus, the requirement of always-available surveillance of nuclear weapons again cannot be met by the means of crowd-sourcing.
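    A rough back-of-the-envelope calculation, using standard two-body orbital mechanics (the ~500 km altitude below is an assumed, typical CubeSat orbit), shows why a LEO satellite only glimpses a site in passing:

    \[
    T = 2\pi\sqrt{\frac{a^{3}}{\mu_{\oplus}}},\qquad
    \mu_{\oplus} \approx 3.986\times 10^{5}\ \mathrm{km^{3}/s^{2}}
    \]

    With \(a \approx 6378\ \mathrm{km} + 500\ \mathrm{km} = 6878\ \mathrm{km}\), this gives \(T \approx 5.7\times 10^{3}\ \mathrm{s} \approx 95\) minutes, so the satellite sweeps over any given site only briefly, a few times per day. Staying fixed over one spot instead requires \(T\) equal to one sidereal day (\(\approx 86{,}164\ \mathrm{s}\)), which forces \(a \approx 42{,}164\ \mathrm{km}\) — a far more expensive geostationary orbit with correspondingly worse image resolution.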

    There is also the problem of licensing. To launch, one would have to apply for government approval, submitting a planned launch date and the projected orbit and duration of the mission (for a small, inexpensive satellite like one a startup might launch, the mission duration will likely be short, as Earth’s gravity and atmospheric resistance cause small satellites to fall back to Earth and burn up in the atmosphere). In this process, one would also have to explain why such a high-resolution imaging satellite is required to achieve the mission. The regulations and barriers for the crowd to enter into surveillance and verification of nuclear arms hinder any hope of a functioning system developing without state aid.

    Though I do not have hope for crowd-sourced systems of surveillance and verification, I do believe crowd-sourced automated analysis will help states with verification processes. The problem JASON presents is how to do this while maintaining clearance levels and state secrets. One important area is the automated analysis of publicly available satellite imagery, with the resulting methods then applied to higher-quality military-grade satellites. To illustrate the difference: the military is allowed satellites of any resolution (to the best of my knowledge, images where each pixel is 3cm x 3cm), whereas industry is limited to 30cm x 30cm resolution https://platform.digitalglobe.com/earth-imaging-basics-spatial-resolution/
    One major factor aiding this movement is the explosion of satellite-image datasets, which feed machine-learning techniques developed by the crowd that can then be generalized to other (non-publicly available) datasets. This has led the US intelligence community to sponsor image-analysis research and competitions https://www.iarpa.gov/challenges/fmow.html. This use of crowd-sourced analysis is, I believe, part of what the JASON report was envisioning. The success of such competitions and research incentivizes continual use of, and state investment in, analysis techniques that can be generalized to the state’s own problems without compromising security.

    Aside: This topic is very close to my senior project, as I constructed a neural network to automatically identify buildings from satellite imagery taken from IARPA’s map of the world challenge (https://www.iarpa.gov/challenges/fmow.html)

  7. In response to Kouta’s question, I don’t think the US or any specific country would benefit from crowd-sourcing for verification. Rather, I think an international body like the IAEA, another UN commission, etc. would benefit more from the crowdsourcing. While it is not clear what would be considered incriminating evidence in the JASON report, as William pointed out, I don’t think that would be a problem. It seemed that the proposal for crowdsourcing was to expand data sources for a more rigorous analysis of a region, many perspectives, and different types of evidence. This seems to be the model that PLANET is using: continuous satellite imagery to catch every minute change. This is critical, especially in terms of verification, as what might seem normal to the layman’s eye, once photographed or recorded and sent into the database, might actually be evidence of non-compliance or outright misuse to an expert who may never have had access to that specific piece of evidence. While it may also result in a lot of unnecessary data points, it could help.
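    To make the “catch every minute change” idea concrete, here is a minimal sketch of change detection between two imaging passes over the same site. The function name and the toy 3x3 grids are hypothetical illustrations, not anything from the readings; real pipelines would of course work on full-size imagery with far more sophisticated methods:

    ```python
    def changed_cells(before, after, threshold=30):
        """Flag grid cells whose brightness changed by more than `threshold`.

        `before` and `after` are equal-sized 2-D lists of pixel intensities
        (0-255), standing in for two satellite passes over a site.
        Returns a list of (row, col) coordinates worth a closer look.
        """
        flagged = []
        for r, (row_b, row_a) in enumerate(zip(before, after)):
            for c, (b, a) in enumerate(zip(row_b, row_a)):
                if abs(a - b) > threshold:
                    flagged.append((r, c))
        return flagged

    # Toy "images": one cell brightens sharply between passes, e.g. a new
    # structure or vehicles appearing at a facility.
    before = [[10, 12, 11],
              [ 9, 10, 12],
              [11, 10,  9]]
    after  = [[10, 13, 11],
              [ 9, 95, 12],   # large change at (1, 1)
              [11, 10,  9]]
    print(changed_cells(before, after))
    # [(1, 1)]
    ```

    The design point is the division of labor the comment describes: machines (or crowds) flag the handful of cells that changed, and the scarce expert attention goes only to those flagged locations rather than to the whole data stream.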

    I think a crowdsourcing option would also make more people aware of the arms systems in their respective countries, in countries they are visiting, and on the world stage in general. If the goal is to reduce the number of arms overall and to verify that treaties are being respected, getting the populace involved, making them aware, and explaining the system, even at a basic level, could help increase control and reduction efforts. In the PLANET videos they talked about the access people now have due to the constant satellite imagery and how much they can learn about agriculture, weather, nuclear fallout, disasters, etc. Getting people involved in crowdsourcing verification can have a similar educational effect and provide people with more access – if the material is not classified. This could be particularly possible if the verification system works through a posting system or through the personal sensor system mentioned in the JASON report. These systems make aspects of our daily lives – social media, our phones, and other sensors – convenient ways to contribute to global verification, to learn about it, and to become more aware.

    On another note, while this is not directly related to the readings, the JASON report and PLANET videos reminded me of an article I had read about a crowdsourcing site for tracking the humanitarian crisis in Syria: https://syriatracker.crowdmap.com/. I remember one article about a man tracking photos to identify and try to attribute different chemical weapons attacks, and later air raids on civilians, to government forces or Russian forces. While this concerned proving that the weapons and places were government property rather than arms control verification, it shows the power of using crowdsourcing to track attacks, weapons, planes, etc. This project started in 2012 and is still going. It is similar in some ways to aspects of verification and shows that it can be a very powerful tool – and that was with one person combing through all of the data he found or was sent. Imagine if there was a whole task force/committee in the IAEA devoted to this!
