Daniel Kimmage: Countering Disinformation through Resilient Information Ecosystem, Partnerships

When it comes to countering disinformation, Daniel Kimmage is big on partnerships that help foster a more resilient information ecosystem while strengthening democratic institutions. The Principal Deputy Coordinator for the Global Engagement Center at the U.S. Department of State was recently in Nigeria as a speaker at the Global Inclusivity and AI: Africa conference, co-hosted by the U.S. and Nigerian governments. In this exclusive interview, he fields questions from Chiemelie Ezeobi on how he coordinates efforts to counter disinformation, the framework used, how AI is changing the landscape of international communication and diplomacy, and the role of the media in countering disinformation, amongst other issues

Can you give me a brief rundown on what the Global Inclusivity and AI: Africa conference is all about?

We’re here at this conference because our mission is to counter disinformation globally and, specifically, foreign information manipulation outside of the United States. Artificial intelligence is one of the biggest developments in this field: technology is so central to how disinformation is spread, how we identify it and how we counter it that we need to keep track of all the latest technologies. Artificial intelligence is a big part of that, and that is why I’m here. This is my fourth trip to Africa in the last year.

We’re very engaged specifically in identifying technological innovation in Africa that can help to fight disinformation. So this is a way for us to meet some of the leading thinkers, to hear African voices and perspectives on artificial intelligence and, hopefully, to meet people who could potentially serve as partners. For us, this is a very exciting opportunity.

So how does it align with your job as the Principal Deputy Coordinator for the Global Engagement Center at the U.S. Department of State?

As I said, technology is a huge part of what we do. One of the things we do at the Global Engagement Center is conduct tech challenges, where we invite applications from tech innovators. We go through them, select seven or eight participants, and then invite them to a day-long or two-day event.

At the end of that event, we award $250,000 to develop tools. We conducted a tech challenge in Côte d’Ivoire in October of last year, and we are planning to conduct one next week in Tanzania. That’s one of our initiatives. It’s not the only way we engage with technology.

One of the other ways is to come to events like this, where we can hear the perspectives of innovators and leaders in the field and meet with them. Disinformation spreads on our phones, in our pockets, and how it spreads is determined by the technology. So we want to be in touch with the leading voices who understand those tools and who are developing the next generation of tools. Because just as artificial intelligence can accelerate the spread of disinformation, it can also put powerful tools in the hands of the people who are fighting disinformation.

Beyond the tech challenges, how else do you fight disinformation?

Sure. I would say partnerships are really at the core of what we do. If I had to put it in one sentence, I would say it is collaborative partnerships informed by common principles to achieve positive outcomes. We work with fact-checking organisations. We work with tech innovators and analysts.

I already talked about our tech challenges. We want to encourage analysis that identifies the sources of disinformation. We want to work with some of the people who are promoting digital and media literacy. I can’t stress enough how central technology is to what we do, and our main vehicle, our main way of accessing technology, is through partnerships.

As part of my IVLP on countering insurgency through media messaging, we went to a terror cell in Denver, and one of the things that struck me most was how easy it was for them to capture or create messages that could trigger people or influence them to join their terror groups. What do you think the good guys can do to get the right messages to the people who need them?

You’re talking about messages that can bring people a different perspective? I think it involves identifying the right local partners, because what you need are people who have the language skills, the cultural skills, the understanding of the environment. Whether we’re talking about recruitment to an extremist organisation or disinformation spreading in a market and undermining social cohesion, it’s always driven by local dynamics, and we can’t sit in Washington and counter it from there. We need to work with the people who have the understanding of the local environment. So once again, it all comes down to partnerships with the right people and the right organisations, who have the right tools to do the good work of countering the actual disinformation.

Do you think it would be much easier for telecoms companies to help block access to all this terrorist propaganda online if you partnered with them? Or do you just partner with tech companies?

Let me be very clear that we’re not a regulatory agency. We’re not engaged in blocking content. Our approach is primarily analytical: we want to understand what’s out there in the environment and how it’s spreading, and then we really want to work with local partners who can counter it, who can put correct information out there that helps people become more informed consumers. What we believe, on the basis of our research and our approach, is that free access to accurate information is the best antidote, the best counter, to disinformation.

At the Center, you coordinate efforts to counter disinformation. How is AI changing the landscape of international communication and diplomacy?

That’s a huge question. It’s changing it in pretty much every way, just as it’s revolutionising other fields. Let me give two examples of how we’re trying to work with this. One of our most powerful tools is diplomatic: we’re diplomats, we’re at the State Department, and we have a diplomatic tool here. This is our framework to counter foreign state information manipulation.

This framework lays out five areas where countries can make themselves more resilient to information manipulation. So the first is national strategies. The second is institutions. The third area is human and technical capacity. The fourth area is independent media, academia, and civil society, and then the last area is international engagement.

That framework has room for working on issues like the responsible use of artificial intelligence, or safe, secure, and trustworthy AI, which is our approach in the US government. So the framework is our primary tool. So far, 22 countries have endorsed it. I’m happy to say that Côte d’Ivoire became the first African country to endorse the framework, in April of this year, and we are hoping to secure additional endorsements as a way to work together with other countries in a collaborative partnership to make ourselves more resilient to foreign information manipulation.

Are you saying this framework is the key to countering disinformation online?

It’s not a cure-all, but for us and our work with other governments, it’s a powerful tool to identify areas where we can improve, and this includes the United States, where we’re struggling and grappling with this problem like other countries. So we feel that, in the diplomatic sphere, the government-to-government sphere, the framework is a powerful tool. We have other tools for partnerships in the tech sphere, for example, working with innovators through our tech challenges, but we’re very enthusiastic about the framework in this government-to-government sphere.

While we have the positives of AI, we also have the negatives. How is your Center promoting safe and sound AI globally?

We want AI to be a force for good in the world, and this is the whole logic behind the promotion of safe, secure, and trustworthy AI. Trustworthy is really key in the counter-disinformation space, because people have to have confidence that the information they’re receiving is credible, that they can trust the source. One of the areas where we’re working is what’s called the authentication of content. We’re working with some of our other colleagues in the US government to develop standards to authenticate the content that we put out, so government content, so that people know they’re getting real information from their government. And then we’re going to use that to do international engagement. I would identify content authentication as one of the areas where we can promote the safe, secure, and trustworthy use of artificial intelligence.
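At its core, content authentication means attaching a verifiable cryptographic signature to published material so that readers, or their software, can confirm it came from the stated source and has not been altered. The sketch below is purely illustrative of that basic primitive, using an Ed25519 signature via Python’s cryptography library; it is not the specific standard Kimmage’s colleagues are developing, and real provenance schemes layer key distribution, timestamps, and edit history on top of it.

```python
# Illustrative sketch only: sign a piece of official content and verify it,
# the basic primitive behind content-authentication standards.
# Assumes the third-party "cryptography" package (pip install cryptography).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# The publisher (e.g. a government press office) holds the private key.
private_key = ed25519.Ed25519PrivateKey.generate()
public_key = private_key.public_key()  # shared openly with the public

statement = b"Official statement: ..."
signature = private_key.sign(statement)

# Any reader holding the public key can check authenticity and integrity.
try:
    public_key.verify(signature, statement)
    print("Content verified: genuine and unaltered.")
except InvalidSignature:
    print("Warning: content does not match its signature.")
```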

With your background in Russian and Central Asian issues, how do you view the role of AI in shaping political dynamics, particularly in areas where the US has strategic interests?

So, as everywhere else, artificial intelligence is a factor. Technology is a factor. Broadly, what I would say is that every country has the right to make its case to the world and to other countries, but that case has to stand or fall on its merits. And on its merits means without information manipulation, including manipulation that relies on AI tools. We know that AI can be used to produce fakes.

It can be used to promote things online. We are against that, and we’ve exposed certain campaigns. As I said, we fully accept that each country can and should make its case to the world, but that case must stand or fall on its own merits.

I’m glad you mentioned the media when you talked about your framework, particularly independent media. What role do you think the media can play in countering disinformation?

A lot! I think the media is probably the most important, or one of the most important, tools out there, because investigative journalists and data-driven journalists can look at the environment and identify coordinated inauthentic behaviour, as the tech companies call it, or disinformation campaigns or information manipulation, as we often call it. And then they can present that to readers in an independent, credible and authoritative way. So we are strong supporters of the role of independent media. Some of the best work on information manipulation has been done by journalists and media organisations, and we look at them as partners in this and, as I said, probably the most credible and important tool.

Government has a role, but we’re aware that, unless we’re talking about policy, government communicators are often not the most credible communicators with some communities. So the media has an enormously important, positive role to play here.

I cover the security beat, and one of the challenges we face is how to strike the balance between telling people what they should know and spreading propaganda for terrorists. Where do you think the balance should come in?

Well, we’re obviously on the side of telling people what they should know, and we’re opposed to propaganda for terrorists. The balance, I think, lies in informed and responsible consumption of information: people receiving information they can trust from people they can trust. Terrorist organisations use lies, they use misrepresentations, they use every form of information manipulation to bring support to their side. There are obviously many tools in the counterterrorism toolbox that are part of law enforcement and intelligence and all of those things, but there is a part of it that is also in the information space. Some of this goes by the term countering violent extremism, where you will often bring in the perspectives of people who were brought into a terrorist organisation but left it, former members who can explain how they were manipulated and how they were lied to, and who can speak with credibility to the lies that terrorist organisations use. But, broadly speaking, we strongly support the use of responsible communications to counter the lies that extremists use to recruit.

So how do you think nations can tailor their own national security strategy to key into the framework you talked about earlier?

I think that, of course, each nation has to set its own strategy and has its own unique challenges to confront. But some of these areas, like the promotion of, as I said, safe, secure, and trustworthy artificial intelligence, and the development and promotion of authentication standards, are emerging as things that governments can work hard to develop, with industry and with each other. To circle back to what I was discussing before, this is why we’re very optimistic about our framework. We recognise that each country has to do this in its own way. But as we learn lessons and make progress, we are really eager to collaborate with like-minded countries confronting the threat of information manipulation, to make ourselves all more secure in the face of this threat. As a diplomat, I’m not going to get ahead of negotiations we conduct, but I’m very pleased that 22 countries have endorsed the framework, including Côte d’Ivoire in Africa, and we are speaking with others and are optimistic about the future.

What is the future of AI when it comes to handling security issues?

If I knew the future of AI, I would actually be able to solve the problem of disinformation. I think the future of AI is obviously going to confront us with enormous challenges, in that it is going to change every aspect of the way we engage with information. Some of this is going to be positive; some of it is going to be frightening. What I find encouraging is that at venues like this conference, we have innovative, engaged leaders very focused on both the promise and the pitfalls of artificial intelligence.

They’re speaking with each other and exchanging lessons. That doesn’t mean we know the future, and it doesn’t mean we have a guaranteed solution to these problems, but these have been, for me, a very eye-opening two days. I’ve learned a lot. I’ve engaged with some truly insightful experts in the Nigerian government, in Nigerian technology and civil society, and innovators from the broader region. This is not going to tell us what the future brings, but it is the way for us to meet the future together, in dialogue, and with an eye to innovation.

About Daniel Kimmage 

At the State Department, Kimmage leads efforts to coordinate and synchronise U.S. government communications designed to counter terrorist recruitment and state-sponsored propaganda and disinformation.

Kimmage’s public service career includes a number of State Department positions, including covering counterterrorism issues for the Office of Policy Planning and serving as Principal Deputy Coordinator of the Center for Strategic Counterterrorism Communications.

From 2008 to 2010, Mr. Kimmage was an independent consultant and Senior Fellow at the Homeland Security Policy Institute. Prior to that, from 2003 to 2008, he was a regional analyst at Radio Free Europe/Radio Liberty, where he focused on Russian and Central Asian issues.

 His published reports on extremist media strategies include Iraqi Insurgent Media: The War of Images and Ideas (co-author, 2007), The Al-Qaeda Media Nexus (author, 2008), and Al-Qaeda Central and the Internet (author, 2010). 

He has also written extensively on Russian politics, including the chapter Selective Capitalism and Kleptocracy in the collection Undermining Democracy: 21st Century Authoritarians (2009). 

Mr. Kimmage received his undergraduate education at the State University of New York at Binghamton, and earned an M.A. in Russian and Islamic history from Cornell University. He is fluent in Russian and Arabic.

Quote 

We want to encourage analysis that identifies the sources of disinformation. We want to work with some of the people who are promoting digital and media literacy. I can’t stress enough how central technology is to what we do, and our main vehicle, our main way of accessing technology, is through partnerships.
