
Hybrid Threats: Challenges for Intelligence

November 1, 2018

by Gregory F. Treverton

NOTE: The views expressed here are those of the author and do not necessarily represent or reflect the views of SMA, Inc.

Hybrid threats cover the range from propaganda to what is often called warfare in the “gray zone,” using proxies or “little green men”—a range laid out in Table 1.[1] Yet the focus of more recent concern is threats and attacks that seek to remain below the level of kinetic war. There, the intelligence challenge is less familiar, for it begins with recognizing that the targets are societies, not armies; that several tools are used both simultaneously and strategically for maximum effect; and that the cyber dimension, along with the social media (SM) and virtual realms, offers new, inexpensive avenues of attack.[2]

Table 1: Range of Hybrid Tools

Tool | Example
Propaganda | Enabled and made cheaper by social media; also targeted at home
Fake news | “Lisa” was portrayed as a Russian-German raped by migrants
Strategic leaks | Macron emails leaked 48 hours before the French election
Funding organizations | China opened a Chinese think-tank in Washington
Political parties | Russia supports sympathetic European parties on right and left
Organized protest movements | Russian trolls organized both pro- and anti-mosque protests in the Houston case
Cyber tools: espionage, attack, manipulation | New tools in the arsenal: espionage is an old tactic with new, cyber means; attack has targeted critical infrastructure, notably in Estonia in 2007; manipulation is the next frontier, changing information without the holders knowing it
Economic leverage | China sought to punish South Korea for accepting the U.S. anti-missile system
Proxies and unacknowledged war | Hardly new, but Russian “little green men” in Ukraine slid into actual combat
Paramilitary organizations | Russian “Night Wolves” bikers intimidate civilians

This piece begins there, with the tools, then turns to the challenges hybrid threats pose for intelligence. It next takes up what is new: the special challenges, but also the special opportunities, of the cyber and virtual realms. The following section focuses on those opportunities and picks up the implications for the organizations performing the traditional INTs—HUMINT and SIGINT, especially—and for counterintelligence. It concludes with lessons for both intelligence and warfighters.

Hybrid Tools and Challenges for Intelligence

What special challenges does the range of hybrid threat instruments in Table 1 create for intelligence? Table 2 lays out a typology of intelligence problems—puzzles, mysteries, and complexities or wicked problems.[3] Hybrid threats fall in the third category. They are wicked not so much because they involve new actors interacting in ways we have not seen, as was the case with terrorists after 9/11. Rather, “by emphasizing elusiveness, ambiguity, operating outside of and below detection thresholds, and by using non-military tools to attack across all of society, hybrid threats represent a new iteration of the complexity found in wicked problems.”[4]

Table 2: Puzzles, Mysteries and Complexities or Wicked Problems

Type of Issue | Description | Intelligence Product
Puzzle | An answer exists but may not be known | The solution
Mystery | The answer is contingent, but key variables, history, and analogy can inform it | Best forecast, perhaps with scenarios or excursions
Complexity, aka wicked problems | Many actors responding to changing circumstances, with no established pattern | “Sensemaking,” perhaps done orally, with intelligence and policy interacting

The challenges of these wicked problems are fundamental, and they touch all aspects of the intelligence process. Most fundamental is the challenge to “truth” in the digital age. The great irony of information technology is that all the wonderful applications meant to connect people end up letting them live in their own “echo chambers,” where they see and hear only what they already agree with. The echo chamber effect—or at least the polarization—also holds for material consumed through traditional media, perhaps even more than for social media (SM).[5] Moreover, studies suggest that fake news propagates faster on SM than real news.[6] In these circumstances, there can be tolerance for, or normalization of, ‘bullshit,’ in Harry Frankfurt’s memorable locution.[7] At worst, the trend could lead to a kind of nihilism about ever knowing “the truth.” In any case, truth seems to be made relative.

Cyber threats and SM spawn a series of related challenges. One is maintaining credibility, both with policy counterparts and with the public. When there is so much information out there, and so many options, how does intelligence lay claim to special credibility, all the more so when leaks are weaponized, like Russia’s release of emails hacked from Hillary Clinton’s campaign and its chairman in the 2016 U.S. elections?[8] To be sure, in that case the U.S. president himself undermined the credibility of intelligence in the public eye by frequently seeming not to accept the firm conclusions of the U.S. Intelligence Community in its January 2017 assessment.[9]

Hybrid threats enlarge what might be called the “intervention space,” both physically and virtually. So far, Russia has been by far the largest practitioner of hybrid threats—hence the main intelligence target. However, China has been active too, and there have been hybrid operations in the Middle East and South Asia. The low cost and the possibility of escaping attribution will bring more countries, as well as non-state groups, into play as targets for intelligence and counterintelligence.

And with virtual tools, geography disappears. That was driven home by the 2016 Houston case. In May, a Facebook page called Heart of Texas encouraged its quarter million followers to demonstrate against an urgent cultural menace—a new library opened by a Houston mosque.[10] “Stop Islamization of Texas,” it cried. But the other side organized as well. A Facebook page linked to the United Muslims of America said that group was planning a counter-protest for the same time and place. In fact, while the United Muslims of America was a real group, the Facebook page was not its doing. Russian trolls had organized both the anti- and the pro-mosque demonstrations.

The challenges for intelligence in dealing with policymakers begin with understanding that they often are not well versed in hybrid threats—witness the U.S. Congress’s hearings with Mark Zuckerberg in the spring of 2018, when lawmakers looked somewhat silly in their ignorance of SM tools. Policymakers are subject to ever more pressures and distractions; this is the “attention economy,” the competition for the eyeballs of recipients. It, too, is not new, but it is new in scale. It is important also to recognize that hybrid threats weaponize doubt: their very ambiguity and the difficulty of attribution sow uncertainty. Analysts likewise have to walk the thin line between rigorous critical thinking and borderline paranoia when trust is constantly eroded by outlets dealing in propaganda or sensation, not news. Moreover, policymakers, like the rest of us, will be drawn toward “sexy” social media and trending narratives while underinvesting resources and attention in deeper trends. And, as always, politicization, leaks, and spinning of intelligence findings by policymakers are threats to the enterprise as a whole and an opening for manipulation by adversaries. The inherent ambiguity, along with the cacophony of messages, may make spinning more likely in the hybrid realm.

Attributing Cyber Attacks

While most elements of hybrid threats are not strikingly new, the digital realm does pose formidable new challenges. Russia’s interference in the 2016 U.S. election capitalized on the two primary vulnerabilities the digital realm creates—a lowered cost of entry for information operations, and cyber espionage and attack.[11] The two elements were employed for synergy: Russia’s messaging during the election was amplified by a coordinated information operation on social media but also relied on more subtle and nefarious attacks in cyberspace. The public nature of social media information operations creates indicators that allow for detection. Most cyber attacks, however, are designed to go undetected or, at the very least, to shroud the perpetrator behind layers of obscurity. Effective cyber attribution is thus critical for responding to hybrid threats.

As new vulnerabilities are created in cyberspace, new opportunities for detection and response also present themselves. Malicious cyber activity, once detected, may indicate the early stages of a hybrid operation. But the situation is similar to identifying influence campaigns—the presence of one hybrid tool does not guarantee the use of others. Malicious actors are also vulnerable themselves, as evidenced by Dutch intelligence services compromising the Russian hacking group APT 29 and witnessing its hack of the DNC.[12]

Despite warnings from the FBI that its computers had been hacked, the Democratic National Committee did not take the threat seriously for seven months, in part because the warnings had been general and did not name Russia as the source. Once it realized the problem existed, it hired CrowdStrike, a private cybersecurity technology company, rather than the FBI, to investigate.[13] The CrowdStrike employee assigned to the case examined the DNC servers’ code and quickly identified the string of code that did not belong. He had even seen the exact code before, from his earlier work with the military’s Cyber Command, and thus knew the culprit—APT 29, a hacking group run by Russian intelligence. In this case, the code was the key that allowed CrowdStrike to attribute the intrusion. In others, it may be source data, tactics, or a combination of factors that allow for successful attribution. The first step in all cases of cyber attribution is to pull all technical data on the breach or attack and identify the nature of the attack, what was accessed or disrupted, and the attack’s general sophistication. Then, the following are common criteria for analyzing digital forensic evidence (a schematic sketch of how such indicators might be weighed follows the list):[14]

  • Source data. Metadata, such as “source IP addresses, domain names, domain name registration information, third-party data from sources like Crowdsource or VirusTotal, email addresses, hashes and hosting platforms,” can help attribution; however, these data points are easy to spoof.
  • Tools, scripts, and programs. Other data points such as phishing packages (files and links that purposely send information back to host when activated), the language of the compiler, programming language, compile time, libraries, patterns, and other signifiers can be found in the attacker’s software.
  • Tactics, techniques, and procedures (TTPs). Perpetrators sometimes have their own “style.” This can range from the method of delivery to the way they cover their tracks. Tracking online social media activity in relation to the attack can be useful. So, too, can trying to geotag fake documents or phishing links to isolate real-life locations.
  • Trying to get into the attacker’s head. Understanding their goals can provide critical insight. Here, the connection to HUMINT is plain.
  • Understanding business drivers. Knowing what is going on within companies can help predict problems. For example, if a company is preparing to release innovative products, it becomes a more attractive target. This underscores that, like terrorism, cyber operations cannot easily be divided between “home” and “abroad.” Better understanding of vulnerabilities at home is key to anticipating threats from “abroad.”
  • Geopolitics. This analysis attempts to determine an actor’s identity by placing the actions under the lens of current events, tying a variety of assumptions about stakeholder motivations to the technical forensics of a cyberattack. This also begins to move from forensics toward a more strategic understanding of the threat.
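
To make the criteria concrete, here is a minimal sketch of how such indicators might be combined into a score. Everything in it, from the indicator names and weights to the sample profile, is an illustrative assumption, not an actual forensic toolchain; real attribution weighs far richer evidence.

```python
# Hypothetical sketch of weighing the forensic criteria above. Indicator
# names, weights, and the sample profile are illustrative assumptions,
# not an actual attribution toolchain.
from dataclasses import dataclass, field

@dataclass
class Evidence:
    source_data: set = field(default_factory=set)  # IPs, domains, registration data (easy to spoof)
    toolmarks: set = field(default_factory=set)    # compiler, libraries, compile-time patterns
    ttps: set = field(default_factory=set)         # delivery methods, cover-up "style"

# Profiles of known actors would come from prior incident reporting.
PROFILES = {
    "APT 29": Evidence({"spoofable.example-ip"},
                       {"compiled_moscow_business_hours"},
                       {"spearphish_then_custom_implant"}),
}

# Source data weighs least, reflecting the caveat that it is easy to spoof.
WEIGHTS = {"source_data": 0.2, "toolmarks": 0.4, "ttps": 0.4}

def attribution_score(observed: Evidence, profile: Evidence) -> float:
    """Weighted overlap between observed indicators and a known profile."""
    score = 0.0
    for attr, weight in WEIGHTS.items():
        seen, known = getattr(observed, attr), getattr(profile, attr)
        if known:
            score += weight * len(seen & known) / len(known)
    return score

observed = Evidence(toolmarks={"compiled_moscow_business_hours"},
                    ttps={"spearphish_then_custom_implant"})
for actor, profile in PROFILES.items():
    print(actor, round(attribution_score(observed, profile), 2))  # 0.8: a lead, not a verdict
```

Even a high score here is a lead, not a verdict, which is precisely the point made later about code and infrastructure reuse being “good enough for Twitter” but not for a courtroom.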

As the attribution case becomes more complex, and the attack more sophisticated, additional measures can be taken to determine the identity of the attackers.

While there are many tools for attributing and responding to threats in cyberspace, an experienced actor can make the challenge more difficult. Some approaches to avoiding attribution include:[15]

  • Spoofing source information or forging the sender’s identity
  • Using a “reflector host, who replies to a forged sender and thus really replies to the actual victim, hiding the attacker’s location”
  • Employing other subtle protocol exploits
  • Employing a “laundering” host to transform the data and obscure the source
  • Altering the timing of the attack—carrying it out very quickly or stretching it over a period of months—to make attribution more difficult

Given the challenges of detecting and responding to threats in the cyber realm, effectively countering hybrid threats requires a whole-of-society response. CrowdStrike’s involvement in responding to the Russian cyber threat is hardly unique. Private companies can and should play an important role in responding to hybrid threats.[16] The U.S. Department of Defense cyber strategy of 2015 notes the private sector’s “significant role in dissuading cyber actors from conducting attacks.”[17] Indeed, the private sector has frequently been involved in attribution.

There remains the big strategic question of whether, and when, nation-state cyber attackers might want their actions attributed to them, as a demonstration of what they can do. The Russian attackers in the 2016 elections either were pretty careless in covering their tracks or didn’t mind the actions being attributed to them. And the Russian services used many of the same methods in their attacks during the French elections in 2017. Still, attribution is often ambiguous; as one industry leader noted: “…many private firms and security researchers are quick to reach a conclusion on who is behind an attack based on code and infrastructure re-use, as well as the tactics, techniques, and protocols (TTPs) they have previously ascribed to bad actors with cute names. The methods typically would not pass a court of law’s evidentiary standards but are good enough for Twitter.”[18] Putin’s insistence, despite all the U.S. analysis, that Russia was not behind the 2016 U.S. election hacks suggests a circumstance much like that of Israeli nuclear weapons: Russia shows what it can do while pretending it isn’t, thus trying to reduce the risk of responses to its actions.

Detecting Social Media-Aided Influence Operations

Social media is often a critical medium for employing, and thus for detecting, influence operations and possible hybrid threats. Russia’s election interference in the United States did not succeed because of hacks and well-timed leaks alone; the campaign also relied on Russian media outlets, paid human trolls, and bots to amplify the message. Of course, the presence of an information campaign does not guarantee the adversary has employed, or will employ, other tools, ranging from economic to kinetic. Though a hybrid threat, by the definition employed here, involves the use of multiple tools synchronously, it does not require social media-driven influence operations to be one of the tools. Yet the report on hybrid threats cited earlier suggests they are often present.[19] Digital tools have lowered the cost of entry of information campaigns, which played an important role in the hybrid operations case studies examined in that report—Russia’s interference in the 2016 U.S. presidential election, intervention in Crimea and Eastern Ukraine, and influence on the 2017 French elections. Troll accounts, botnets, and ongoing digital influence campaigns may be the proverbial canary in the coal mine for hybrid threats. Fortunately, Twitter bots, online trolls, and thus influence operations rely on public posting, which has key indicators and makes identifying this aspect of hybrid operations possible.

Automated accounts differ from human-driven accounts in numerous important ways—the degree to which they appear as a real person, level of automation, activity, and purpose—but many send signals that can reveal their nature. There are three classes of identifiers: “time-oriented information (temporal markers), content-oriented information (semantic markers), and social-oriented information (network markers).”[20] Temporal markers are often the simplest way of identifying bots, as the data is the easiest to gather and the indicators are the strongest. An account can be identified as a bot with a high degree of certainty if tweets are sent at a rate unreasonable for human activity or on a rigid schedule. Semantic markers require more advanced analysis: can the account communicate effectively when messaged on social media sites, and does the content of its posts make consistent sense? Network markers can identify bots whose network connections are primarily with other bots, though this information requires more sophisticated tools to gather.
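
The temporal test is simple enough to sketch in a few lines. The thresholds below are illustrative assumptions, not established cutoffs; the point is only that posting rate and interval regularity are computable from public timestamps.

```python
# Minimal sketch of the temporal-marker test: flag an account whose posting
# rate is implausibly high, or whose intervals are implausibly regular, for
# a human. The thresholds are illustrative assumptions.
from statistics import mean, stdev

def temporal_flags(timestamps: list) -> dict:
    """timestamps: posting times in seconds, sorted ascending."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if len(gaps) < 2:
        return {"insufficient_data": True}
    rate_per_day = 86400 / mean(gaps)
    regularity = stdev(gaps) / mean(gaps)  # near zero = clock-like scheduling
    return {
        "rate_per_day": round(rate_per_day, 1),
        "suspiciously_fast": rate_per_day > 72,  # cf. the DFRLab bands below
        "clock_like": regularity < 0.1,
    }

# A feed posting exactly every ten minutes trips both flags.
print(temporal_flags([600 * i for i in range(50)]))
```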

The Atlantic Council’s Digital Forensic Research Lab (DFRLab) suggests that political bots share three qualities: activity, amplification, and anonymity.[21] It identifies activity, or temporal markers, as the lead characteristic of political bots. The more bots tweet, the more they push their desired message. The rate of tweeting also matters—72 tweets per day over a period of months qualifies as suspicious, 144 per day is highly suspicious, and over 240 tweets per day is “hypertweeting.” Political bot accounts typically show a high rate of amplification, or retweets, of a certain political message. Anonymity is decided on a simple criterion: is the account’s profile information too impersonal to be credible?
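
The DFRLab activity bands translate directly into code. Treating them as a single cutoff function is this sketch’s simplification; the 72-per-day test applies over a period of months, not to a single busy day.

```python
# The DFRLab activity bands from the text, encoded directly. Applying them
# to a single daily rate is a simplification; DFRLab's suspicion threshold
# assumes sustained activity over months.
def dfrlab_activity_band(tweets_per_day: float) -> str:
    if tweets_per_day > 240:
        return "hypertweeting"
    if tweets_per_day >= 144:
        return "highly suspicious"
    if tweets_per_day >= 72:
        return "suspicious"
    return "unremarkable"

for rate in (50, 90, 150, 300):
    print(rate, "->", dfrlab_activity_band(rate))
```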

A team at Indiana University created Botometer, formerly BotOrNot, an automated tool to identify social bots. They propose six criteria for identifying bots: network, user, friends, temporal, content, and sentiment.[22] Like other such tools, it returns, for a given account, a score indicating how likely the account is to be automated. Numerous other online tools allow for simple detection or tracking of bot accounts, though their accuracy is not guaranteed. These websites include botcheck.me, which analyzes Twitter accounts to classify them as “high-confidence bot accounts.” The Hamilton 68 project, part of the Alliance for Securing Democracy at the German Marshall Fund of the United States, tracks Russian propaganda “in near real-time,” examining both trends from official Russian accounts and bot or troll accounts linked to influence operations.[23]
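
For readers who want the mechanics, this is roughly what querying Botometer looked like through its Python client around the time of writing. The credential placeholders are obviously assumptions, and both the key parameter name and the exact response fields have changed across versions of the service, so treat this as a sketch rather than current documentation.

```python
# Sketch of querying Botometer through its Python client (pip install
# botometer), roughly as documented circa 2018. Credentials are
# placeholders; parameter names and response fields vary by version.
import botometer

twitter_app_auth = {
    "consumer_key": "...",
    "consumer_secret": "...",
    "access_token": "...",
    "access_token_secret": "...",
}

bom = botometer.Botometer(wait_on_ratelimit=True,
                          mashape_key="...",  # API key for the Botometer endpoint
                          **twitter_app_auth)

result = bom.check_account("@example_account")
# The response breaks the overall score down by the six criteria named
# above: network, user, friends, temporal, content, and sentiment.
print(result["scores"])
```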

Though detecting bot accounts and influence campaigns involves uncertainty, these efforts are important for catching potential early warning signs of hybrid campaigns. In many cases, uncovering a bot-assisted social media campaign will not indicate the use of other hybrid tools. But tracking SM influence operations is a useful practice in and of itself. Reaching out to the private sector for bot detection and trend tracking provides many more “eyes” and is critical in developing a whole-of-society response to hybrid threats.

The possibilities are suggested by the 2016 U.S. elections case. While the SM-aided propaganda campaign surprised the United States, it should not have, for there was warning, but from an unfamiliar quarter. A group of outside analysts had been tracking the online dimensions of the jihadists and the Syrian civil war when, as early as 2014, they came upon interesting anomalies. When experts criticized the Assad regime online, they were immediately attacked by armies of trolls on Facebook and Twitter. Unrolling the network of the trolls revealed they were a new version of “honeypots,” presenting themselves as attractive young women eager to discuss issues with Americans, especially those involved in national security. The analysts made the connection to Russia but found it impossible, that early, to get anyone in the American government to listen, given the crises competing for attention.[24] Yet the case drives home the point that governments and their intelligence services can draw on lots of help from citizens who are actively monitoring SM for their own reasons. And governments do not have to do much reaching; simply being open to listening may be enough.

Taking Advantage of Opportunities

Opportunities for intelligence lie in new media, new networks, and new partnerships. However, the culture of intelligence is slow to adapt; in one study of social media in intelligence a few years ago, NSA analysts reported getting the question from colleagues: “what’s a hashtag?”[25] The private citizens looking at jihadi websites in 2014 who found evidence of Russian fakery drive home the possibilities of “crowdsourcing” around the world, seeking partners in identifying fake news and planted posts. Alas, this kind of openness and reach to the private sector runs very much against the grain of intelligence cultures. Cyber is another great opportunity. In the short run, private actors upset the paradigm in which intelligence attributes in secret so that policy can take decisions. Now, private companies are doing attribution too and will go public when it suits them. Yet in the longer run, those companies are a great opportunity as partners, and when they go public it might even ease the “sources and methods” problem for intelligence.

Traditional intelligence collectors will play their roles in new circumstances; exactly how remains an open question. SIGINT, for instance, now uses social media mostly for targeting traditional collection, especially against terrorists: “terrorists may have good OPSEC but they also have children, and so when I find an email…” HUMINT can be critical but will be pushed into a much broader arena and will find itself collaborating with new partners, including some outside government. HUMINT is probably more critical than ever but no easier. To the extent the targets are foreign, especially the Russian intelligence services, they are at least known, and perhaps somewhat “softer” than Al Qaeda.

Penetrating Russian troll and hacker groups, like the Internet Research Agency, would be valuable in the usual ways, providing indications of Russian targets and methods. One of the great successes of U.S. and allied intelligence services has been following the “money trail” of terrorists or drug traffickers. It is an open question whether, and to what extent, virtual currencies like bitcoin will make that trail harder to follow as, for instance, purveyors of hybrid threats fund parties and propaganda in other countries. So far, the effect seems small, but that may be because the currencies have been used more as investments than as media of exchange.

SM are a great source of intelligence—and of warning. As one analyst from the U.S. Defense Intelligence Agency put it in describing the identification of Russian soldiers in Ukraine, “selfies are our best friend.” As in that case, cell phones may be geolocated, or the location may be inferred from analysis of the selfie—opening an entirely new source for GEOINT. So, too, ubiquitous cameras offer GEOINT new opportunities for identifying people and their movements.
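
The simplest version of that geolocation is reading the GPS coordinates a phone writes into a photo’s EXIF metadata. A minimal sketch with the Pillow library follows; the filename is a placeholder, real operational photos are often stripped of metadata, and EXIF rational formats vary across Pillow versions.

```python
# Minimal sketch: pull GPS coordinates from a photo's EXIF metadata with
# Pillow. "selfie.jpg" is a placeholder; many photos have the metadata
# stripped, in which case location must be inferred from image content.
from PIL import Image
from PIL.ExifTags import TAGS, GPSTAGS

def gps_from_photo(path: str):
    exif = Image.open(path)._getexif() or {}
    gps_raw = next((v for k, v in exif.items() if TAGS.get(k) == "GPSInfo"), None)
    if gps_raw is None:
        return None  # no GPS record: stripped or never written
    gps = {GPSTAGS.get(k, k): v for k, v in gps_raw.items()}

    def to_degrees(dms, ref):
        degrees, minutes, seconds = (float(x) for x in dms)
        value = degrees + minutes / 60 + seconds / 3600
        return -value if ref in ("S", "W") else value

    return (to_degrees(gps["GPSLatitude"], gps["GPSLatitudeRef"]),
            to_degrees(gps["GPSLongitude"], gps["GPSLongitudeRef"]))

print(gps_from_photo("selfie.jpg"))  # (latitude, longitude) or None
```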

Collection will also require new forms of collaboration between HUMINT and SIGINT, as suggested by the increasing practice of human-aided SIGINT. As microwave transmissions gave way to fiber optics in the 1990s, signals no longer could be gobbled up wholesale by satellites. As encryption became effectively unbreakable, the best way to intercept signals was before they were encrypted, and that meant getting very close to the signaler. These developments drove a closer partnership between clandestine and SIGINT services.

What is still, slightly weirdly, called “open source” is very much a work in progress, especially in the United States. It is tempted to show its worth on “hard” targets, like proliferation, which probably are not its comparative advantage, and the U.S. Open Source Enterprise (OSE) has returned to the CIA rather than the inter-agency auspices of the Director of National Intelligence (DNI). It tends to regard SM as just the latest media to exploit, and it goes about validation in a fairly traditional way, looking at location and numbers of retweets, for instance. Ideally, it would become the focal point for matters virtual but unclassified across the entire government, in particular pushing the AI needed to cope with ubiquitous data.

Hybrid threats will reconfigure counterintelligence. After all, preventing foreign powers from hacking into computers or manipulating public opinion would seem the essence of counterintelligence. The awkwardness, though, is that this formulation dramatically expands the set of institutions to be protected to include infrastructure and virtual providers, both of which sit largely in the private sector. As vulnerabilities drive adversaries’ targeting, understanding possible target spaces becomes key to channeling resources—online and off. That in turn will require building synergies inside and outside the government, doing red teaming, and developing fragility indicators and heuristics for potential attack spaces.

An open question for counterintelligence is what role there might be for taking the offensive. In principle, the Western countries could seek to sow conspiracy and doubt in Russia’s intelligence circles. The tactic would play off the services’ eagerness to cater to Putin’s worldview. The goal would be to widen the chasms between Russian intelligence services, playing them off against each other and draining their limited resources, much as Russia seeks to exacerbate social divisions in the Western countries. If the offensive required covert insertion of misinformation, though, it would risk descending to Putin’s level, discrediting both facts and the media that seek them—thus making truth still more relative.

Lessons for Intelligence and Warfighters

  • Recognize that this is war by other means. Cyber and virtual conflict are the wars of the future, society against society.
  • Intelligence is best done as a “whole-of-society” enterprise, with lots of, in effect, crowdsourcing in both the cyber and virtual realms. Warfighters plainly have a role to play, especially at the end of the spectrum where hybrid shades into kinetic. But the military shouldn’t be the dominant element.
  • Just as hybrid techniques blur the lines between combatants and citizens, they are another reason companies and citizens should be wary of cyber threats and manipulated information on social media. Like the rest of us, the U.S. DNC was, in 2016, more interested in getting its work done than in protecting its networks. Its mistake made it easier for the Russians to hack into the emails of John Podesta, Hillary Clinton’s campaign chairman.
  • The question of retaliation as an element of deterrence is a complicated one. Surely, there is a growing market of firms offering companies advice about how to retaliate, or “hack back.” And the Macron campaign’s tactic of flooding phishers with fake emails and documents is suggestive.
  • For countries, the guidance begins with Hippocrates—do no harm. That probably applies especially to the United States, given its dependence on the virtual realm and hence its vulnerability. More often than is comfortable, it may have to emulate Lyndon Johnson’s line about the mule in the rain, just standing there and taking it. Prevention and defense, then remediation and attribution, are critical. Retaliation will most often take the form of naming and shaming, perhaps accompanied by indictments of foreign perpetrators who aren’t likely to be extradited.
  • In any case, the great strength of the Western democracies is their free media, and so the last thing they should want to do in retaliation is emulate Putin in ways that compromise or discredit the media engaged in telling true news, not fake.

[1] The full range was on display in the Russian intervention in Ukraine. For that case and the background for this piece see Gregory F. Treverton and others, Addressing Hybrid Threats, Swedish National Defence University Center for Asymmetric Threat Studies, April 2018, available at fhs.se/download/18.1ee9003b162cad2caa5a384d/1525791813728/Addressing%20Hybrid%20Threats.pdf

[2] Andrew Thvedt provided invaluable research assistance in preparing this paper, and I thank him.

[3] For my discussion of these categories, see Gregory F. Treverton, “Risks and Riddles,” Smithsonian, June 2007.

[4] See Patrick Cullen, “Hybrid Threats as a New ‘Wicked Problem’ for Early Warning,” Hybrid CoE Strategic Analysis, May 2018, available at hybridcoe.fi/publications/strategic-analysis-may-2018-hybrid-threats-new-wicked-problem-early-warning/.

[5] Levi Boxell, Matthew Gentzkow, Jesse M. Shapiro, “Is the Internet Causing Polarization? Evidence from Demographics,” National Bureau of Economic Research, 2017, http://www.nber.org/papers/w23258.

[6] Soroush Vosoughi, Deb Roy, and Sinan Aral, “The Spread of True and False News Online,” Science 359 (March 9, 2018), 1146–1151.

[7] See his careful parsing in Harry Frankfurt, “On Bullshit,” available at www5.csudh.edu/ccauthen/576f12/frankfurt__harry_-_on_bullshit.pdf.

[8] This and subsequent references to the case of the 2016 U.S. elections and 2017 French elections are from Treverton and others, cited above.

[9] U.S. Intelligence Community Assessment (ICA), an unclassified version of which was made public in January 2017, available at dni.gov/files/documents/ICA_2017_01.pdf.

[10] As reported in Farhad Manjoo, “Reality TV, As Produced in U.S. by Russia,” New York Times (international edition), November 10, 2017, 7.

[11] For the case, see Treverton and others, cited above. See also James Andrew Lewis, “Rethinking Cybersecurity: Strategy, Mass Effect, and States,” Center for Strategic and International Studies, January 2018, available at csis-prod.s3.amazonaws.com/s3fs-public/publication/180108_Lewis_ReconsideringCybersecurity_Web.pdf. “We can begin to approach the problem of cybersecurity by defining attack. While public usage calls every malicious action in cyberspace an attack, it is more accurate to define attacks as those actions using cyber techniques or tools for violence or coercion to achieve political effect. This places espionage and crime in a separate discussion (while noting that some states use crime for political ends and rampant espionage creates a deep sense of concern among states.)”

[12] “Dutch agencies provide crucial intel about Russia’s interference in US-elections,” deVolkskrant, January 25, 2018, volkskrant.nl/media/dutch-agencies-provide-crucial-intel-about-russia-s-interference-in-us-elections~a4561913/.

[13] Jason Leopold, “He Solved the DNC Hack. Now He’s Telling His Story for the First Time,” Buzzfeed, November 8, 2017, buzzfeed.com/jasonleopold/he-solved-the-dnc-hack-now-hes-telling-his-story-for-the.

[14] These six method bullets draw on Justin Harvey, “The shadowy—and vital—role attribution plays in cybersecurity,” Accenture, May 4, 2017, accenture.com/us-en/blogs/blogs-shadowy-vital-role-attribution-cybersecurity.

[15] Ibid., 82.

[16] Examples include FireEye’s report, “APT28: A Window Into Russia’s Cyber Espionage Operations,” indicating Russian involvement in a variety of espionage activities against private sector and government actors; Novetta’s report, “Operation SMN: Axiom Threat Actor Group Report,” indicating Chinese government involvement in cyber espionage against a variety of private companies, governments, journalists, and pro-democracy groups; and CrowdStrike’s report, “CrowdStrike Intelligence Report: Putter Panda,” identifying Unit 61486 of the Chinese PLA as responsible for the cyber-enabled theft of corporate trade secrets primarily relating to the satellite, aerospace, and communication industries. Herbert Lin, “Attribution of Malicious Cyber Incidents,” Hoover Working Group on National Security, Technology, and Law, Aegis Series Paper No. 1607 (September 26, 2016), available at lawfareblog.com/attribution-malicious-cyber-incidents.

[17] Ibid., 27.

[18] Anup Ghosh, “Playing the Blame Game: Breaking down Cybersecurity Attribution,” Help Net Security, 19 December 2016, available at helpnetsecurity.com/2016/12/19/cybersecurity-attribution-blame-game/.

[19] Treverton and others, cited above.

[20] The Computational Propaganda Project, “Resource for Understanding Political Bots,” November 18, 2016, available at comprop.oii.ox.ac.uk/research/public-scholarship/resource-for-understanding-political-bots/.

[21] Ben Nimmo, “Human, Bot or Cyborg? Three clues that can tell you if a Twitter user is fake,” Digital Forensic Research Lab, December 23, 2016, medium.com/@DFRLab/human-bot-or-cyborg-41273cdb1e17.

[22] Clayton A. Davis, Onur Varol, Emilio Ferrara, Alessandro Flammini, and Filippo Menczer, “BotOrNot: A System to Evaluate Social Bots,” Proceedings of the 25th International Conference Companion on World Wide Web (2016), 273–274.

[23] “Hamilton 68: Securing Democracy Against Foreign Influence,” dashboard.securingdemocracy.org/about.

[24] See Andrew Weisburd and Clint Watts, “Trolling for Trump: How Russia Is Trying to Destroy Our Democracy,” War on the Rocks, November 2016, available at warontherocks.com/2016/11/trolling-for-trump-how-russia-is-trying-to-destroy-our-democracy/.

[25] The study was, alas, classified. For an unclassified version of part of it, see Gregory F. Treverton, New Tools for Collaboration: The Experience of the U.S. Intelligence Community, Center for Strategic and International Studies, January 2016, available at csis.org/analysis/new-tools-collaboration.

Edited by Dick Eassom, CF APMP Fellow
Published on November 1, 2018, by SMA, Inc.