Deepfakes, Twitter Hacked: Welcome to Disinformation on Steroids. An Interview with Tech Expert Branka Marijan

Q: The July Twitter hack has been confirmed as the largest in the social media company’s 14-year history. It is also being called just the tip of a very large iceberg, with vast security implications. What concerns you most about this attack from an electoral-fraud and public-trust perspective?

Branka Marijan: Around the time of the Twitter attack, a really interesting study was released from King’s College London by Heather Williams and Alexi Drew, called “Escalation by Tweet: Managing Nuclear Diplomacy”. Its key point was that Twitter, because of its global reach, informality, and immediacy, can contribute to the escalation of conflict: it can create a perfect storm by amplifying misunderstandings in real time. From their perspective, that has had, and will continue to have, profound security implications. As you rightly point out, there are also security implications for public trust, electoral interference, disinformation campaigns, and so on. Twitter is a platform that we need to understand as security researchers. Governments, civil society, and academics all need to understand how this social media platform allows misinformation to spread and how it is shaping politics. It really is permeating our political lives.

We need to look at the possibility of these platforms undermining public trust in official institutions and accounts, which can have real consequences. We are in a global pandemic at the moment, and if people do not trust their health authorities, or get their information elsewhere from sources that may be altered or run by malicious actors, the consequences for our overall security are profound. Ultimately, we need to rethink how these new platforms are affecting global politics and security, and how we think about engaging citizens.

The recent Twitter hack is a flag being raised, and we should be concerned. Imagine if whoever hacked these accounts had used them to engage two nuclear-armed states. We need to be cognizant of that possibility and pay attention to what is happening in the social media world. It is easy to dismiss it as something less serious, but we know it has been used maliciously before. The Israel Defense Forces, for example, were among the first to demonstrate how militaries could use social media. This is the reality that we need to be aware of.


Q: With the improvements in deepfake technology and everything that comes with it, what keeps you up at night?

Branka Marijan: The ability to alter videos and images and make them look real is really concerning. You can often still tell when an image or video has been altered, but the technology is constantly improving. Just because we can spot the fakes right now doesn’t mean we will still be able to in a few years. That is what really concerns me: altered images or videos that could spread misinformation or target communities, whether ethnic, racial, or religious.


Q: Vetting social media platforms for national security is a double-edged sword. At one end of the spectrum, you have China, which monitors its citizens’ online activity and has built the so-called “Great Firewall of China” to isolate itself from foreign influence. At the other end, you have a much freer and more decentralized concept of the internet, built on liberal ideas but potentially less secure as a result. How should we balance security and freedom on social media platforms?

Branka Marijan: It certainly is more challenging for democratic societies to grapple with these questions. In an authoritarian regime, the public does not have the same expectations regarding respect for citizens’ rights, freedom of speech, and freedom to protest. Such regimes can clamp down on free speech in a way that democratic societies cannot, and they do so every day. What is important for democratic societies is to avoid the slippery slope and take care to preserve the values and kinds of society that are worth preserving. It is a challenge, because you need to be able to respond to certain kinds of content and understand the impacts on your national security while walking the fine line of respecting citizens’ rights.

Some responses may not initially seem that problematic. For example, various policy analysts suggested that the government should influence opinion on the global pandemic by correcting information and being more responsive to misinformation. There was a backlash from leadership, who recognized that this would mean influencing discourse within the Canadian population. That is problematic and is not what democratic societies expect from their governments. We want full transparency from our institutions, in particular our security- and defence-related institutions. We must take care to ensure that responses fit within our broader values. I will be the first to admit that this is very difficult, but democratic governments have tools and policy responses at their disposal to respond in ways that respect citizens’ rights.


Q: Are Facebook, Twitter, and the other big players doing enough to protect users from disinformation? Do we have legal recourse to demand more of them? What does President Trump’s recent executive order targeting social media mean for these companies and for users of the platforms?

Branka Marijan: On the last point, about the executive order, I think some of it is showmanship, a political play targeting specific demographics and supporters of the administration to show that something is being done in response to certain segments of the media. What we ultimately need are not displays of showmanship but practical, doable steps. I do believe that these social media companies can and must do more. Take YouTube, for example. If you watch a video on some conspiracy theory, you will likely be recommended several other conspiracy-themed videos, and very soon you will be led down a rabbit hole that can end in extremist content. YouTube could change some of the algorithms making these recommendations and be transparent about them. Many studies have shown how social media platforms can propagate extremist content and contribute to the spread of broader extremist ideology and dis/misinformation. Do we have legal recourse? Absolutely, though we may need to update certain policies, such as privacy legislation in Canada and other countries.

Another thing to remember is that these social media companies have a global reach, and therefore a responsibility to their global community. Think of Myanmar and the Rohingya crisis: Facebook was used to call for violence against the Rohingya population, yet at the time only one of a group of ten Facebook moderators covering the region spoke the local language. At the very least, these companies should train or hire people with the skills needed to better understand regional dynamics.


Q: There is a dark new reality to our increasingly connected world. It’s not all kittens and cupcakes. How do we prevent the spread of misinformation campaigns and enhance citizen resiliency in this context?

Branka Marijan: This is the question to be asking. There is only so much that social media companies and governments can do. It shouldn’t fall solely to the user or consumer to be constantly aware of how their information is being tracked or used, but we do need a population with one crucial skill: digital literacy. We really need to teach this in schools and have more discussions on these topics. People need to understand why certain information is being shared and how it spreads. A certain segment of the population doesn’t grasp the extent to which malicious actors can exploit their sentiments and feelings; there is a whole psychology behind these activities. Sometimes states are the ones targeting populations, sometimes it is non-state actors with interests tied to a state, and sometimes it is a purely non-state actor invested in spreading dis/misinformation. We have studied how propaganda spread within ethnic conflicts and during the Cold War. What is new about these media is that information can spread very quickly; that immediacy was not possible before.

A critical part of digital literacy is understanding the psychology behind these dis/misinformation campaigns. This means understanding the grievances or issues that such campaigns tend to focus on, such as identity: exploiting whatever real or perceived divisions exist in a country, whether ethnic, racial, or religious. Information gaslighting, the constant flood of misinformation, is also important, as is incidental exposure, where the aim is to increase everyday exposure to false content. All of this has specific goals: to distract, divide, and create distrust in official institutions and authorities. People really need this kind of information so that they are aware and able to ask themselves why certain content is so appealing to them and why it speaks to their world view. They need to understand that there is a degree of psychology and engineering happening in the background, done by individuals who want to influence their opinions. The difficulty is that these fissures or divides exist in all countries, and they will be seized upon by these actors. States, academics, researchers, analysts, civil society, and others need to start looking at ways to inform the public that this is going on. Citizens need to be able to open their social media feed, think critically about what they are seeing, and understand that there may be actors behind it. There are also technical fixes, such as adjusting the algorithms and clearing out bots as much as possible, and policy fixes that respond to social media and explain to individuals why certain things are happening. That kind of conversation is what will ultimately lead to a more resilient society.

Regardless of where you stand politically, there is a need to recognize that various actors are constantly trying to influence our opinions, our electoral preferences, and our lives, and that these platforms and new technologies amplify their malicious intent. I would like to see much more digital literacy and a broader discussion of this.


Heather Williams and Alexi Drew, “Escalation by Tweet: Managing Nuclear Diplomacy” (Centre for Science and Security Studies, King’s College London, 2020), https://www.kcl.ac.uk/csss/assets/10957%E2%80%A2twitterconflictreport-15july.pdf


Dr. Branka Marijan is a Senior Researcher with Project Ploughshares.

Branka leads the research on the military and security implications of emerging technologies. Her work examines ethical concerns regarding the development of autonomous weapons systems and the impact of artificial intelligence and robotics on security provision and trends in warfare. She holds a PhD from the Balsillie School of International Affairs with a specialization in conflict and security. She has conducted research on post-conflict societies and published academic articles and reports on the impacts of conflict on civilians and diverse issues of security governance, including security sector reform.

