C4ISRNet (https://www.c4isrnet.com)

Quieting Discord: A new frontier in military leaks and extremism
https://www.c4isrnet.com/news/your-military/2024/07/10/quieting-discord-a-new-frontier-in-military-leaks-and-extremism/
Wed, 10 Jul 2024 16:38:16 +0000

During a five-month period from 2022 to 2023, Massachusetts Air National Guard member Jack Teixeira sent 40,000 messages on the online chat platform Discord, some of which contained classified national security secrets.

An FBI investigation revealed that Teixeira, a 22-year-old who ran a server on Discord called “Thug Shaker Central,” spent much of his life online, talking primarily with other young men via messages, video calls and voice chats. He chatted about guns and military gear, threatened his school, made racist and antisemitic jokes, traded conspiracy theories, discussed antigovernment sentiments and, in a bid to show off, shared some of the military’s most closely guarded secrets about the Russia-Ukraine war and the Middle East.

By the time the young airman was arrested in 2023, media scholar PS Berge had been studying Discord and its users for three years and had created an online consortium of other academic researchers who were doing the same. That an intelligence leak occurred on the site, creating a national security incident, didn’t come as a shock to her.

“My response was, ‘Of course. Of course this would happen on Discord,’” Berge said. “Because on a platform like this, you share everything with your people. Everything about your life. So, why not share national security secrets?”

Teixeira pleaded guilty in March to six counts of willful retention and transmission of national defense information. His sentencing is scheduled for September, and prosecutors are asking that he serve between 11 and 17 years in prison.

Massachusetts Air National Guardsman Jack Teixeira, right, in U.S. District Court in Boston, Friday, April 14, 2023. (Margaret Small via AP)

The same month Teixeira agreed to a plea deal, the FBI revealed it had investigated another service member in 2022 for leaking information on Discord.

Former Air Force Staff Sgt. Jason Gray, who served as a cyber analyst at Joint Base Elmendorf-Richardson, Alaska, admitted to running a Facebook group for followers of Boogaloo, a loosely organized, antigovernment movement that advocates for a second Civil War. Gray was disgruntled with his military career, and he discussed his dissatisfaction with the U.S. government in several Discord channels created for the Boogaloo movement, according to a 2022 FBI affidavit that was unsealed in March.

Gray, who used the account name LazyAirmen#7460, was accused of posting a classified image in a private Discord channel that he “likely obtained” from his access to National Security Agency intelligence, the affidavit states.

Investigators said the image could’ve been shared “in furtherance of the Boogaloo ideology,” but didn’t elaborate on the image’s details. It’s uncertain whether the FBI is still investigating the potential leak. But while searching Gray’s electronic devices for evidence of an intelligence breach, authorities discovered hundreds of images of child pornography. Gray is currently serving five years in federal prison on multiple child pornography charges.

Oversharing is a hallmark of Discord, an online world where members of certain channels talk all day, every day, and even fall asleep together on voice calls, said Megan Squire, a computer scientist and deputy director for data analytics at the Southern Poverty Law Center.

People who study the platform agree that it’s not inherently bad — it’s used by millions of gamers, students, teachers, professionals, hobbyists and members of the military community to communicate and socialize. However, extremists have hijacked a part of the platform to radicalize and recruit others to their causes, said Jakob Guhl, senior manager for policy and research at the Institute for Strategic Dialogue.

Following the leak of national security secrets and other high-profile, nefarious uses of the platform in recent years, researchers are grappling with what to think of the platform’s small but headline-grabbing dark side, and many disagree on whether Discord as a company is doing enough to root out bad actors.

“It’s always a bit difficult to strike the right tone between not scaring people off the platform, because the majority of users are completely fine, but also highlighting that there is an actual issue of radicalization,” Guhl said. “It’s not the biggest or most offending platform, but it definitely plays a crucial role among this network.”

Many service members and veterans join Discord communities looking for camaraderie. (Staff Sgt. Jaccob Hearn/Army via Canva)

‘Not inherently evil’

The National Consortium for the Study of Terrorism and Responses to Terrorism, known as START, studied decades of violent extremist attacks and found a military background to be the most commonly shared characteristic among those who committed or plotted mass casualty attacks from 1990 through 2022, more so than criminal histories or mental health problems.

Researchers from START said the study revealed why extremist groups tend to focus recruitment efforts toward people with military service records: Even a small number of them can have an outsized impact inside extremist movements.

While such recruitment occurs on Discord, Guhl, Berge and Squire agreed that the mere presence of service members and veterans on the platform isn’t a cause for concern.

“It’s a popular platform and not inherently evil,” Squire said. “I’d be much more concerned about military folks on 4chan, Telegram, places like that. Nothing good is happening on those platforms, but Discord could be useful.”

In fact, Berge said, it can be a valuable forum for marginalized people to foster a sense of community. On its “about” page, Discord describes its mission as one that helps users find a sense of belonging.

“Discord is about giving people the power to create space to find belonging in their lives,” the company’s mission statement reads. “We want to make it easier for you to talk regularly with the people you care about. We want you to build genuine relationships with your friends and communities close to home or around the world.”

That’s what the veterans group Frost Call is doing on the platform. The nonprofit encourages veterans and service members to stay connected through gaming, one of its founders told Military Times last year. As of June, it boasted 390 members.

Attendees play games while visiting the Discord booth at the Game Developers Conference 2023 in San Francisco. (Jeff Chiu/AP)

“When we founded Frost Call, we built an organization around this idea of bringing veterans together, helping to improve camaraderie that’s missing from military service,” Marine Corps veteran Wesley Sanders said last year. “It serves an enormous mental health need, but also ... an existential need for a lot of veterans.”

Moreover, when new users join Discord, extremist elements of the platform are not easily visible.

Discord is made up of millions of servers centered on various topics. Users can join up to 100 servers, and each server has numerous text, voice and video channels. When a new user creates an account and searches servers to join, the platform will suggest “its most popular, most successful, public-facing communities,” rather than any disquieting, invite-only communities, Berge said.

“If you are a standard user, and if you’re signing in to Discord for your general interests — maybe you’re looking for fellow students or fellow veterans — 90% of the time, you’re not going to accidentally stumble upon an extremist group,” she said. “They actually go through a lot of effort to make these spaces insulated, to make them difficult to find.”

When using Disboard, a third-party search platform for Discord servers, prompts such as “Nazi” or “white supremacist” won’t elicit results like they used to, Berge said. In a 2021 study, she found thousands of Discord servers that marketed themselves on Disboard as hateful and Nazi-affiliated spaces.

“You used to be able to search for those terms and find communities. It was horrifying,” Berge said. “Those servers still exist, but they’ve changed the ways they’re identified, and in some cases, we know that high-profile, toxic communities have been shut down.”


Extremists find a foothold

Founders Jason Citron and Stan Vishnevskiy created Discord in 2015 as a way to allow friends around the world to communicate while playing video games online. Its popularity exploded during the COVID-19 pandemic, when lockdowns went into effect and many people became more isolated than ever before.

Just two years after it launched, Discord gained notoriety as the platform of choice for facilitators of the 2017 “Unite the Right” rally in Charlottesville, Virginia. Organizers, including some veterans, used Discord to share propaganda and coordinate the protest, which turned deadly. James Fields was convicted of killing Heather Heyer when he drove his car into a group of counterprotesters. Fields had joined the Army in 2015 but was separated quickly because of a cited lack of motivation and failure to train.

In 2022, Discord made headlines again after a mass shooting at an Independence Day parade in Highland Park, Illinois, where seven people were killed and dozens more injured. The suspected shooter ran his own Discord server called “SS,” where he complained about “commies,” short for “communists,” according to posts archived by the nonprofit website Unicorn Riot.

That same year, an 18-year-old white gunman killed 10 Black people at a supermarket in Buffalo, New York. The gunman, Payton Gendron, spent months writing plans for the attack in a diary he kept on a private Discord server, visible only to him. About 30 minutes before the attack, Gendron sent out invitations for others to view the diary, and 15 people accessed it, according to Discord.

The platform again faced scrutiny following Teixeira’s leak of national security secrets.

“It’s periodic. Every couple of years, it seems like there’s something,” Squire said. “There are other platforms that are worse, but Discord keeps coming up over and over again.”

White nationalist demonstrators walk into Market Street Park surrounded by counterdemonstrators in Charlottesville, Virginia, on Aug. 12, 2017. (Steve Helber/AP)

Research institutions such as the Institute for Strategic Dialogue found that Discord serves as a hub for socializing and community-building across far-right groups, including Catholic extremists, the white supremacist Atomwaffen Division and the antigovernment Boogaloo movement.

Extremist groups value the platform’s layers of privacy and anonymity, as well as its chat and video functions and collaborative nature, Guhl said. Berge described it as a walled garden, or an online environment where user access to content can be controlled. Servers come with the capability to assign hierarchy to different members and allow some members to access information that others can’t, the researchers said.

“In, say, a Twitter direct-messaging thread or Facebook DM, you don’t really have levels and hierarchies,” Squire said. “Discord really allows you to have more fine-grained ranking structures.”

Another reason for the prevalence of extremists on the platform stems from its roots in gaming, Guhl surmised.

Rachel Kowert, a globally recognized researcher on gaming and mental health, has spent five years researching extremism in video game communities. Though gaming itself is a powerful tool for connection and growth, extreme and hateful ideologies are now commonplace in those spaces, Kowert said.

“If you’re spending a lot of time in the social or gaming spaces where misogyny is commonplace, that can in turn start to internalize in the way you see the world and interact in it,” Kowert said.

Fighting a dark legacy

The existence of far-right groups on Discord — and the high-profile instances of extremism on the platform in the past several years — has spawned its “extremist legacy,” one from which it’s now trying hard to distance itself, said Berge.

Discord said it removed more than 2,000 far-right-affiliated servers following the “Unite the Right” rally. After the Buffalo killings, it removed Gendron’s server and worked to prevent the spread of content related to the attack, the company said. At that point, Discord agreed it “must do more to remove hate and violent extremism.”

Discord CEO Jason Citron testifies during a Senate Judiciary Committee hearing on Capitol Hill on Jan. 31, 2024. (Manuel Balce Ceneta/AP)

“We created Discord to be a place for people to find belonging, and hate and violence are in direct opposition to our mission,” the company said in a statement at the time. “We take our commitment to these principles seriously and will continue to invest in and deploy resources.”

Earlier this year, the company reported that 15% of its staff works on its user safety team, which cracks down on harassment, hateful conduct, inappropriate contact, violent and abusive imagery, violent extremism, misinformation, spam, fraud, scams and other illegal behavior.

During the investigations into Teixeira and Jason Gray, Discord officials immediately cooperated with law enforcement, a company spokesperson told Military Times. And in recent months, Discord has leaned on machine-learning technology to moderate content.

“We expressly prohibit using Discord for illegal activity, which includes the unauthorized disclosure of classified documents,” the spokesperson said.

The company publishes reports each quarter showing actions taken against various accounts and servers. The latest report, published in January, says Discord disabled 6,109 accounts and removed 627 servers that espoused violent extremism during the last few months of 2023.

Squire and Guhl agreed that Discord is “pretty good” at responding to extremist content. Guhl credited the company for including extremism and hate speech in its community guidelines, as well as for deleting servers on a regular basis that breach its terms of service. Discord also created a channel where Squire could flag questionable content on the platform, and the company has been receptive to the concerns she’s raised, she said.

“I credit where credit is due, and I have to give them credit for that,” Squire said. “I think it’s taken seriously, and there are other platforms that I could not say that about.”

Extremists are ‘absolutely still there’

Berge applauded Discord for ramping up the technology behind its moderation and for introducing IP bans, which restrict a device from accessing the platform, rather than just an account. Still, she sees room for improvement.

Discord should place more emphasis on educating moderators and users about how to recognize when someone is being radicalized and pulled into an extremist space, Berge said. She also criticized the platform for disbanding a program in 2023 that included hundreds of volunteer moderators.

“It wasn’t Discord’s automated flagging systems that caught national security secrets being leaked by Jack Teixeira. It took other users and community moderators digging into it and someone finally reporting it,” Berge said. “Elevating people and giving them tools to moderate is absolutely central to protecting the platform, and that’s one area where I think they’re taking a step back.”

Berge is still researching communities on Discord, four years after she first uncovered a network of white supremacists using the platform as a recruitment ground. Despite its community guidelines and efforts to remove offending servers and accounts, Discord still serves as a meeting place for pockets of extremism.

“They’re harder to find, but they are absolutely still here. We’re still finding them,” Berge said. “It is still one of the most popular spaces for people to congregate, share and be in community with each other, for better or for worse.”

Discord remains the “platform of choice” for some hate groups, noted Squire, who described the company’s fight against extremists as playing whack-a-mole: As soon as one is removed, another pops up. A lack of institutional knowledge among far-right extremist groups is partly to blame, she said.

“Everybody’s always fresh, and they don’t have any structure for teaching one another and learning from mistakes of the past,” Squire said. “That’s convenient for us, because as we keep amassing knowledge, they make the mistake of reusing the technology that’s most convenient, rather than being strategic.”

This story was produced in partnership with Military Veterans in Journalism. Please send tips to MVJ-Tips@militarytimes.com.

Navy should hit back harder against Houthi online disinformation
https://www.c4isrnet.com/opinion/2024/07/10/navy-should-hit-back-harder-against-houthi-online-disinformation/
Wed, 10 Jul 2024 09:02:00 +0000

Last month, Capt. Christopher “Chowdah” Hill, commanding officer of the aircraft carrier Dwight D. Eisenhower, invited journalists to inspect the flight deck of his carrier while it was underway in the Red Sea.

The journalists reported seeing nothing wrong on the flight deck, which was precisely the point of Hill’s invitation. Ike and its crew remained on station, with no hole in the deck.

Two weeks earlier, a spokesman for Yemen’s Houthi rebel movement announced that the rebels had struck the Eisenhower with a barrage of missiles to punish the United States for its support of Israel in its war against Hamas.

On X (formerly Twitter), Houthi supporters shared a video allegedly showing a large crater at the forward end of the Eisenhower’s flight deck. Other accounts posted a different image of a fiery blast aboard the ship.

The purported evidence of a strike spread quickly across Chinese and Russian social media platforms, thanks in part to the efforts of Russian sites with a reputation in the West for spreading disinformation.


The Houthis’ online conjuring of a successful attack on Ike that never happened complements their months-long campaign to disrupt commercial shipping in the Red Sea that has sunk commercial vessels and injured civilian mariners.

And while the U.S. military and allies regularly hit back with airstrikes against Houthi missile launchers and other assets in Yemen, the Pentagon is less prepared to defend against the online lies and disinformation that the Houthis are spreading.

In the instance of the false Ike attack, Capt. Hill took matters into his own hands, leveraging his 86,000 followers on X. The day after the false claims emerged, Hill began to post videos and still images showing normal operations aboard his ship, including a plane landing on the flight deck and trays of muffins and cinnamon buns fresh from the oven in the ship’s bakery.

Meanwhile, independent analysts exposed how the Houthis generated their false evidence of a missile strike on the Eisenhower.

An Israeli analyst demonstrated that the supposed photograph of a crater on the carrier’s flight deck consisted of a stock image of a hole superimposed on an overhead shot of the Eisenhower taken from satellite imagery dated almost a year before the alleged strike.

The fictional attack on Ike did not come as a surprise to anyone tracking Houthi disinformation efforts. In an ironic example from March, a Telegram channel and a pro-Houthi website shared an AI-generated image of a burning vessel they identified as the Pinocchio, an actual commercial ship the Houthis had targeted but missed.

The Houthis’ supporters had pulled their supposed evidence from a website that shared free stock images. However, no one from the Pentagon officially debunked this image as the Israeli analyst did for the fake photos of Ike.

In addition to these forgeries, pro-Houthi accounts have posted actual images of commercial vessels in flames, claiming the destruction resulted from Houthi attacks.

Yet in those cases, one image showed a burning ship on the Black Sea while another showed events that took place off the coast of Sri Lanka. Pro-Houthi posters even attempted to portray a blurry photo of a distant volcano as a successful strike on an Israeli ship.

This deluge of deceptively labeled images was also met with crickets from the Pentagon.

The U.S. military appears to grasp the need to counter disinformation spread by the Houthis and other regional adversaries. In February, the Joint Maritime Information Center, or JMIC, launched its efforts to provide accurate information to shipping companies about Houthi strikes, both real and imagined.

The JMIC operates under the umbrella of the Combined Maritime Forces – a naval partnership of 44 nations under the command of the top U.S. admiral in the region, who also serves as commander of U.S. 5th Fleet.

This is a start, but the Navy has yet to show that it can debunk false information as quickly as the Houthis post it online.

It is fortunate that an Israeli civilian had the skill and commitment necessary to expose the alleged crater aboard the Eisenhower as a work of photoshopping. He posted his conclusions on X four days after the Houthis publicized the supposed attack. Ideally, the Navy itself should be prepared to debunk such propaganda as soon as it appears.

Standing up this kind of capability should be a priority for the JMIC, which could include such efforts in its existing weekly updates.

It is important to act now before the Houthis’ disinformation apparatus becomes more sophisticated. Already, one of its supporters’ fake images of a burning ship garnered 850,000 views on X.

Moreover, the challenge is not limited to the Red Sea or the Middle East. Military forces in every command should have public affairs and open-source intelligence personnel working together to debunk false and exaggerated claims of enemy success on the battlefield.

Max Lesser is a senior analyst on emerging threats at the Foundation for Defense of Democracies, a nonprofit, nonpartisan think tank.

DOJ ‘deepfake’ concerns over Biden audio interview highlight AI worry
https://www.c4isrnet.com/federal-oversight/2024/06/04/doj-deepfake-concerns-over-biden-audio-interview-highlight-ai-worry/
Tue, 04 Jun 2024 18:14:33 +0000

(AP) — Releasing an audio recording of a special counsel’s interview with President Joe Biden could spur deepfakes and disinformation that trick Americans, the Justice Department said, conceding the U.S. government could not stop the misuse of artificial intelligence ahead of this year’s election.

A senior Justice Department official raised the concerns in a court filing on Friday that sought to justify keeping the recording under wraps. The Biden administration is seeking to convince a judge to prevent the release of the recording of the president’s interview, which focused on his handling of classified documents.

The admission highlights the impact AI-manipulated disinformation could have on voting and the limits of the federal government’s ability to combat it.

A conservative group that’s suing to force the release of the recording called the argument a “red herring.”

Mike Howell of the Heritage Foundation accused the Justice Department of trying to protect Biden from potential embarrassment. A transcript of the interview showed the president struggling to recall certain dates and confusing some details, while showing a deep recall of information at other times.

“They don’t want to release this audio at all,” said Howell, executive director of the group’s oversight project. “They are doing the kitchen sink approach and they are absolutely freaked out they don’t have any good legal argument to stand on.”

The Justice Department declined to comment Monday beyond its filing.

Biden asserted executive privilege last month to prevent the release of the recording of his two-day interview in October with special counsel Robert Hur. The Justice Department has argued witnesses might be less likely to cooperate if they know their interviews might become public. It has also said that Republican efforts to force the audio’s release could make it harder to protect sensitive law enforcement files.

Republican lawmakers are expected to press Attorney General Merrick Garland at a hearing on Tuesday about the department’s efforts to withhold the recording. According to prepared remarks, Garland will tell lawmakers on the House Judiciary Committee that he will “not be intimidated” by Republican efforts to hold him in contempt for blocking their access to the recording.

Sen. Mark Warner, the Democratic chair of the Senate Intelligence Committee, told The Associated Press that he was concerned that the audio might be manipulated by bad actors using AI. Nevertheless, the senator said, it should be made public.

“You’ve got to release the audio,” Warner said, though it would need some “watermarking components, so that if it was altered” journalists and others “could cry foul.”

In a lengthy report, Hur concluded no criminal charges were warranted over Biden’s handling of classified documents. His report described the 81-year-old Democrat’s memory as “hazy,” “poor” and having “significant limitations.” It noted that Biden could not recall such milestones as when his son Beau died or when he served as vice president.

Biden’s aides have long been defensive about the president’s age, a trait that has drawn relentless attacks from Donald Trump, the presumptive GOP nominee, and other Republicans. Trump is 77.

The Justice Department’s concerns about deepfakes came in court papers filed in response to legal action brought under the Freedom of Information Act by a coalition of media outlets and other groups, including the Heritage Foundation and Citizens for Responsibility and Ethics in Washington.

An attorney for the media coalition, which includes The Associated Press, said Monday that the public has the right to hear the recording and weigh whether the special counsel “accurately described” Biden’s interview.

“The government stands the Freedom of Information Act on its head by telling the Court that the public can’t be trusted with that information,” the attorney, Chuck Tobin, wrote in an email.

Bradley Weinsheimer, an associate deputy attorney general for the Justice Department, acknowledged “malicious actors” could easily utilize unrelated audio recordings of Hur and Biden to create a fake version of the interview.

However, he argued, releasing the actual audio would make it harder for the public to distinguish deepfakes from the real one.

“If the audio recording is released, the public would know the audio recording is available and malicious actors could create an audio deepfake in which a fake voice of President Biden can be programed to say anything that the creator of the deepfake wishes,” Weinsheimer wrote.

Experts in identifying AI-manipulated content said the Justice Department had legitimate concerns in seeking to limit AI’s dangers, but its arguments could have far-reaching consequences.

“If we were to go with this strategy, then it is going to be hard to release any type of content out there, even if it is original,” said Alon Yamin, co-founder of Copyleaks, an AI-content detection service that primarily focuses on text and code.

Nikhel Sus, deputy chief counsel at Citizens for Responsibility and Ethics in Washington, said he has never seen the government raise concerns about AI in litigation over access to government records. He said he suspected such arguments could become more common.

“Knowing how the Department of Justice works, this brief has to get reviewed by several levels of attorneys,” Sus said. “The fact that they put this in a brief signifies that the Department stands behind it as a legal argument, so we can anticipate that we will see the same argument in future cases.”

Disinformation creates ‘precarious year for democracy,’ experts warn
https://www.c4isrnet.com/flashpoints/extremism-disinformation/2024/05/13/disinformation-creates-precarious-year-for-democracy-experts-warn/
Mon, 13 May 2024 19:35:04 +0000

As wildfires swept across Maui in 2023, killing more than 100 people and causing widespread destruction, another damaging force was spreading online.

Chinese actors exploited the unfolding chaos and took to social media, where they shared a conspiracy theory that the fire was the result of a “meteorological weapon” being tested by the U.S. Department of Defense. According to new analysis from Microsoft Corporation, the Chinese accounts posted photos created with generative artificial intelligence, which can produce images from written prompts.

The situation exemplifies two challenges that some experts are warning about ahead of the presidential election in November: The use of generative AI to create fake images and videos, and the emergence of China as an adversary that stands ready and willing to target the United States with disinformation. Academics are also voicing concerns about a proliferation of alternative news platforms, government inaction on the spread of disinformation, worsening social media moderation and increased instances of public figures inciting violence.

An environment ripe for disinformation is coinciding with a year during which more than 50 countries are holding high-stakes elections. Simply put, it’s a “very precarious year for democracy,” warned Mollie Saltskog, a research fellow at The Soufan Center, a nonprofit that analyzes global security challenges. Some of the messaging meant to sow division is reaching veterans by preying on their sense of duty to the U.S., some experts warned.

“Conspiracy theories are a threat to vulnerable veterans, and they could drag your loved ones into really dark and dangerous places,” said Jacob Ware, a research fellow at the Council on Foreign Relations who published a book this year about domestic terrorism.

“The seeds are there for this kind of activity again, and we need to make the argument that protecting people from conspiracy theories is in their best interest, not just in the country’s.”

The threat of state-backed disinformation

China kept to the sidelines during the 2016 and 2020 presidential elections, watching as Russia targeted the U.S. with chaos-inducing disinformation, according to the U.S. intelligence community. But Beijing’s disinformation capabilities have increased in recent years, and the country has shown a willingness to get involved, Saltskog said.

In a report published in April, Microsoft warned that China was preparing to sow division during the presidential campaign by using fake social media accounts to discover what issues divide Americans most.

The accounts claimed that a train derailment in Kentucky was deliberately caused by the U.S. government, and they polled Americans about issues like climate change, border policies and racial tensions. China uses the information gathered to “understand better which U.S. voter demographic supports what issue or position and which topics are the most divisive, ahead of the main phase of the U.S. presidential election,” the technology behemoth warned.

Microsoft Corporation reported in April that inauthentic Chinese social media accounts were attempting to learn more about what divides U.S. voters in order to prepare for a disinformation campaign ahead of the November presidential election. (Screenshot of Microsoft report)

“The primary concern today, in our assessment, is Russia, China and Iran, in that order,” Saltskog said. “Specifically, when we talk about China … we’ve seen with the Hawaii wildfire example that they have AI-powered capabilities to produce disinformation campaigns targeting the U.S. It’s certainly very concerning.”

Envoys from China and the U.S. are set to meet this week to discuss the risks of artificial intelligence and how the countries could manage the technology. A senior official in President Joe Biden’s administration told reporters Friday that the agenda does not include election interference, but noted the topic might come up in discussion.

“In previous engagements, we have expressed clear concerns and warnings about any [Peoples Republic of China] activity in this space,” a senior administration official said. “If the conversation goes in a particular direction, then we will certainly continue to express those concerns and warnings about activity in that space.”

Logically, a British tech company that uses artificial intelligence to monitor disinformation around the world, has been tracking Russian-sponsored disinformation for years. Kyle Walter, the company’s head of research, believes Russia is positioned to increase its spread of falsehoods during the run-up to the November election, likely focusing on divisive issues, such as immigration and the U.S. economy.

Russia isn’t seeking to help one candidate over another, Walter added. Rather, it’s trying to sow chaos and encourage Americans to question the validity and integrity of their voting process.

Microsoft’s threat analysis center published another report at the end of April, saying that Russian influence operations to target the November election have already begun. The propaganda and disinformation campaigns are starting at a slower tempo than in 2016 and 2020, but they’re more centralized under the Russian Presidential Administration than in the past, Microsoft officials said.

So far, Russia-affiliated accounts have been focusing on undermining U.S. support for Ukraine, pushing disinformation meant to portray Ukrainian President Volodymyr Zelensky as unethical and incompetent, and arguing that any American aid to Ukraine was directly supporting a corrupt and conspiratorial regime, the report states.

Storm-1099, a Russian influence actor, is using alternative news sites to sow division and undermine U.S. democracy ahead of the November presidential election.

State-backed disinformation campaigns like these have used images of U.S. service members and targeted troops and veterans during the previous two presidential elections.

A study from Oxford University in 2017 found Russian operatives disseminated “junk news” to veterans and service members during the 2016 election. In 2020, Vietnam Veterans of America warned that foreign adversaries were aiming disinformation at veterans and service members at a massive scale, posing a national security threat.

“It’s certainly happened historically, and it’s certainly a threat to be aware of now,” Saltskog said.

A social media ‘hellscape’

In March, the pause in public sightings of Kate Middleton, along with the lack of updates regarding her health following abdominal surgery, created a breeding ground for conspiracy theories.

Scores of memes and rumors spread in an online fervor until Middleton posted a video March 20, in which the Princess of Wales shared the news that she had been diagnosed with an undisclosed type of cancer. Some who shared conspiracies responded with regret.

The situation offered a public example of how conspiracy thinking snowballs on social media platforms, as well as the real harm it can cause, said AJ Bauer, an assistant professor at the University of Alabama, where he studies partisan media and political communications.

“It does reinforce the fact that social media gives us an opportunity to crowdsource and spin conspiracy theories and conspiracy thinking,” Bauer said. “It can be done for a kind of whimsy, or it can be done for harm, and the line between whimsy and harm is a fine one. It can tip from one to the other pretty quickly.”

Social media platforms were blamed during the presidential election seasons in 2016 and 2020 for the influential campaigns that gained traction on their sites. Since then, the situation has only worsened, several experts argued.


Bauer specifically blamed recent changes at X, formerly Twitter, which was taken over in 2022 by billionaire Elon Musk.

Musk has used the platform to endorse at least one antisemitic conspiracy theory, and several large corporations withdrew from the platform after their ads were displayed alongside pro-Nazi content. Bauer described the site as a “hellscape” that is “objectively worse than it was in 2020 and 2016.”

“One really stark difference is that in 2020, you had a lot of big social media platforms like Twitter and Meta trying at least to mitigate disinformation and extremist views because they got a lot of blame for the chaos around the 2016 election,” Bauer said. “Part of what you see going into this election is that those guardrails are down, and we’re going to experience a free-for-all.”

In addition to the changes at X, layoffs last year struck content-moderation teams at Meta, Amazon and Alphabet, the owner of YouTube, leading to fears that the platforms would not be able to curb online abuse or remove deceptive disinformation.

“That timing could not be worse,” Saltskog said. “We’re going to have a much thinner bench when it comes to the people inside social media platforms who do this.”

Kurt Braddock, an assistant professor at American University who studies extremist propaganda, argued that social media platforms don’t have financial incentive to moderate divisive or misleading content.

“Their profits are driven by engagement, and engagement is often driven by outrage,” Braddock said.

Because of this, Braddock believes there should be more focus placed on teaching people — especially younger people — how to spot disinformation online.

A study last year by the Center for Countering Digital Hate found that teenagers were more likely than U.S. adults to believe conspiracy theories. Another study by the University of Cambridge found that people ages 18 to 29 were worse than older adults at identifying false headlines, and the more time people spent online recreationally, the less likely they were able to spot misinformation.

The average age of the active-duty military is 28.5, according to a Defense Department demographic profile published in 2023, and new recruits are typically in their early 20s. Because of the young age of the force, Braddock thinks the Defense Department should be involved in teaching news literacy.

The Pentagon and Department of Veterans Affairs did not respond to a request for comment about any efforts to help service members and veterans distinguish accurate information online.

“They’ve grown up in the digital age, and for some it’s been impossible to differentiate what’s real and what’s not real,” Braddock said. “They’ve essentially been thrown to the wolves and don’t have the education to be able to distinguish the two. I think there needs to be a larger effort toward widespread media literacy for young people, especially in populations like the military.”

Elon Musk, co-founder of Tesla and SpaceX and owner of X Holdings Corp., speaks at the Milken Institute's Global Conference at the Beverly Hilton Hotel, on May 6, 2024, in Beverly Hills, California. (Photo by Apu Gomes/Getty Images)

AI speeds disinformation

Overall, people are getting better at spotting disinformation because of awareness efforts over the past several years, Braddock believes.

However, just as more people were becoming accustomed to identifying false information, the landscape changed, he said. Generative AI gained traction last year, prompting the launch of tools that can create new images and videos, called deepfakes, from written descriptions.

Since Hamas launched a surprise attack on Israel on Oct. 7, these digital tools have been used to create propaganda about the ensuing conflict. Some AI images portray injured or frightened Palestinian children running from airstrikes in Gaza, while others depict crowds of people waving Israeli flags and cheering for the Israel Defense Forces.

“This is a new thing, and most people aren’t prepared to differentiate between what’s real and what’s not,” Braddock said. “We need to stay on top of these different technologies which can be used to reach a large amount of people with propaganda. It only takes one person to do a lot of damage.”

The Soufan Center and Council on Foreign Relations consider AI to be the top concern heading into November. The technology is developing faster than Congress can work to regulate it, and social media companies are likely to struggle to moderate AI-generated content leading up to the election, Ware said.

As Congress grapples with reining in AI, some states are taking the lead and scrambling to pass legislation of their own. New Mexico is the latest to pass a law that will require political campaigns to provide a clear disclaimer when they use AI in their advertisements. California, Texas, Washington, Minnesota and Michigan have passed similar laws.

Ware said that among those working to counter domestic terrorism, AI has been treated like “a can getting kicked down the road” — a known problem that’s been put off repeatedly.

“We knew it was a specter that was out there that would one day really affect this space,” Ware said. “It’s arrived, and we’ve all been caught unprepared.”

Artificial intelligence is supercharging the threat of election disinformation worldwide, making it easy for anyone to create fake – but convincing – content aimed at fooling voters. (AP Photo/Markus Schreiber, File)

The technology can accelerate the speed and scale at which people can spread conspiracy theories that radicalize others into extremist beliefs, Ware added. At The Soufan Center, researchers have found that generative AI can produce high quantities of false content, which the center warned could have “significant offline implications” if it’s used to call for violent action.

“You can create more false information at scale. You can just ask an AI-powered language model, ‘Hey, create X number of false narratives about this topic,’ and it will generate it for you,” Saltskog said. “Just the speed and scale and efficacy of it is very, deeply concerning to us experts working in this field.”

The creation of AI images and videos is even more concerning because people tend to believe what they can see, Saltskog said. She suggested people look at images and videos carefully for telltale signs of digital deception.

The technology is still developing, and it’s not perfect, she said. Some signs of deepfakes could be hands that have too many or too few fingers, blurry spots, the foreground melding into the background and speech not aligning with how the subject’s mouth is moving.

“These are things the human brain catches onto. You’re aware of it and attune to it,” Saltskog said. “Your brain will say, ‘Something is off with this video.’”

As Congress and social media platforms lag in regulating AI and moderating disinformation, Americans have been left to figure it out for themselves, Ware argued. Bauer suggested people do their homework about their source of news, which includes determining who published it, when it was published and what agenda the publisher might have. Saltskog advised people to be wary of anything that elicits a strong emotion, because that is the goal of those pushing propaganda.

Similarly, Ware recommended that if social media users see something that seems unbelievable, it likely is. He suggested they look for other sources providing that same information to help determine if it’s true.

“People are going to take it upon themselves to figure this out, and it’s going to be through digital literacy and having faith in your fellow Americans,” Ware said. “The stories that are trying to anger you or divide you are probably doing so with an angle, as opposed to a pursuit of the truth.”

This story was produced in partnership with Military Veterans in Journalism. Please send tips to MVJ-Tips@militarytimes.com.

China dissolves Strategic Support Force, focused on cyber and space
https://www.c4isrnet.com/global/asia-pacific/2024/04/23/china-dissolves-strategic-support-force-focused-on-cyber-and-space/
Tue, 23 Apr 2024 14:03:28 +0000

CHRISTCHURCH, New Zealand — China has disbanded and replaced its Strategic Support Force, a pivotal component of the People’s Liberation Army’s modernization efforts.

The Strategic Support Force, or SSF, was created on Dec. 31, 2015. It existed for a little more than eight years.

After China dissolved the SSF on April 19, it established an Information Support Force, with President Xi Jinping present at its investiture ceremony in Beijing the same day.

Its first commander is Lt. Gen. Bi Yi, a former deputy commander of the SSF. The Information Support Force is directly subordinate to the Central Military Commission, the top political party organ that oversees China’s armed forces.

Senior Col. Wu Qian, a Defense Ministry spokesperson, said the change is part of “building a strong military, and a strategic step to establish a new system of services and arms and improve the modern military force structure.”

He added that the Information Support Force underpins “coordinated development and application of network information systems.” This suggests it is responsible for command and control, information security, and intelligence dissemination.

He also said the move would have “profound and far-reaching significance” on PLA modernization. However, Brendan Mulvaney, the director of the U.S. Air Force’s China Aerospace Studies Institute, told Defense News it’s unlikely to be “as big of a shift as the 2015-2016 reforms,” which overhauled the PLA.

The military considers the information domain as important as the four traditional air, land, sea and space domains.

The PLA now has three nascent arms — the Information Support Force, Cyberspace Force and Aerospace Force. It appears the latter two were existing SSF departments that China renamed.

After the shakeup, the PLA’s new organization features four services and four arms: the existing PLA Army, Navy, Air Force and Rocket Force make up the services, while the three previously mentioned arms sit alongside a fourth, the incumbent Joint Logistics Support Force.

The Cyberspace Force will subsume the responsibilities of the SSF’s former Network Systems Department, whose mandate was offensive and defensive cyber operations.

Indeed, the Defense Ministry described the Cyberspace Force’s role as “reinforcing national cyber border defense, promptly detecting and countering network intrusions and maintaining national cyber sovereignty and information security.”

The Aerospace Force will take over the responsibilities of the SSF’s Space Systems Department, meaning it will supervise space operations and space launches. Wu said the force will “strengthen the capacity to safely enter, exit and openly use space.”

The ministry said that “as circumstances and tasks evolve, we will continue to refine the modern military force structure.”

Xi has repeatedly urged the PLA to do two things: modernize its readiness structure for high-tech combat and loyally follow party diktats.

He has now ordered the Information Support Force to “resolutely obey the party’s command and make sure it stays absolutely loyal, pure and reliable.”

AUKUS allies developing undersea capabilities they can field this year
https://www.c4isrnet.com/unmanned/2024/04/18/aukus-allies-developing-undersea-capabilities-they-can-field-this-year/
Thu, 18 Apr 2024 19:21:09 +0000

Though the submarine portion of the AUKUS trilateral alliance will take decades to fully come to fruition, development of the advanced technology under the agreement is in full swing, as Australia, the U.K. and the U.S. seek quick wins for their fleets, officials said.

Pillar 2 of the agreement focuses on advanced tech the nations can develop and field together. There are eight working groups focused on cyber, quantum, artificial intelligence, electronic warfare, hypersonics, undersea warfare, information sharing, and innovation, each with a list of ideas to quickly test and push to operators.

Leaders told Defense News how this process is playing out in the undersea warfare working group and how they aim to bring new capabilities to the three navies as soon as this year.

Dan Packer, the AUKUS director for the Commander of Naval Submarine Forces, who also serves as the U.S. lead for the undersea warfare working group, said April 4 that the group has four lines of effort: a torpedo tube launch and recovery capability for a small unmanned underwater vehicle; subsea and seabed warfare capabilities; artificial intelligence; and torpedoes and platform defense.

An Iver3-580 Autonomous Underwater Vehicle is put on display at Marine Corps Base Hawaii, Sept. 6, 2017. (Cpl. Jesus Sepulveda Torres/US Marine Corps)

On the small UUV effort, the U.S. Navy on its own in 2023 conducted successful demonstrations: one called Yellow Moray on the West Coast using HII’s Remus UUV; and another called Rat Trap on the East Coast using an L3Harris-made UUV.

L3Harris’ Integrated Mission Systems president Jon Rambeau told Defense News in March that his team had started with experiments in an office using a hula hoop with flashlights attached, to understand how sensors perceive light and sound. They moved from the office to a lab and eventually into the ocean, with a rig tethered to a barge that allowed the company’s Iver autonomous underwater vehicle to, by trial and error, learn to find its way into a small box that was stationary and then, eventually, moving through the water.

Torpedo tube launch

Rambeau said the UUV hardware is inherently capable of going in and out of the torpedo tube, but there’s a software and machine learning challenge to help the UUV learn to navigate various water conditions and safely find its way back into the submarine’s torpedo tube.

Virginia-class attack submarines can silently shoot torpedoes from their launch tubes without giving away their location. If submarines can also fill their tubes with small UUVs, they’d gain the ability to stealthily expand their reach and surveil a larger area around the boat.

During a panel discussion at the Navy League’s annual Sea Air Space conference on April 8, U.K. Royal Navy Second Sea Lord Vice Adm. Martin Connell told Defense News that his country, too, would accelerate its work on developing this capability. He said the U.K. plans to test it on an Astute-class attack submarine this year, and then based on what worked for the U.K. and the U.S., they’d determine how to scale up the capability.

Packer said this effort will “make UUV operations ubiquitous on any submarine. Today, it takes a drydock shelter. It takes divers. It takes a whole host of Rube Goldberg kinds of things. Once I get torpedo tube launch and recovery, it’s just like launching a torpedo, but they welcome him back in.”

He added that the team agreed not to integrate this capability onto Australia’s Collins-class conventionally powered submarines now, but Australia will gain this capability when it buys the first American Virginia-class attack submarine in 2032.

Commander Sean Heaton of the U.K. Royal Navy presents the capabilities used on board HMS Tamar during the Integrated Battle Problem 23.3 exercise near Sydney, Australia. The exercise tested a range of autonomous systems operating from the Royal Australian Navy's Mine Hunter Coastal HMAS Gascoyne, Undersea Support Vessel ADV Guidance and the UK's Off-Shore Patrol Vessel HMS Tamar. (LSIS David Cox/Royal Australian Navy)

On subsea and seabed warfare, Packer said all three countries have an obligation to defend their critical undersea infrastructure. He noted the U.K. and Australia had developed ships that could host unmanned systems that can scan the seabed and ensure undersea cables haven’t been tampered with, for example.

Connell said during the panel the British and Australian navies conducted an exercise together in Australia involving seabed warfare. This effort took just six months from concept to trial, he said, adding he hopes the team can continue to develop even greater expeditionary capability through this line of effort.

Packer said a next step would be collectively developing effectors for these seabed warfare unmanned underwater vehicles — “what are the hammers, the saws, the screwdrivers that I need to develop for these UUVs to get effects on the seabed floor, including sensors.”

A P-8A Poseidon assigned to Patrol Squadron (VP) 46 takes off from the runway at Naval Air Station (NAS) Sigonella, Italy, Jan. 17, 2024.  (MC2 Jacquelin Frost/US Navy)

The primary artificial intelligence effort today involves the three nations’ P-8 anti-submarine warfare airplanes, though it will eventually expand to the submarines themselves.

Packer said the nations created the first secure collaborative development environment, such that they can all contribute terabytes of data collected from P-8 sensors. The alliance, using vendors from all three countries’ industrial bases, is working now to create an artificial intelligence tool that can identify all the biological sources of sounds the P-8s pick up — everything from whales to shrimp — and eliminate those noises from the picture operators see. This will allow operators to focus on man-made sounds and better identify potential enemy submarines.

Packer said the Navy never had the processing power to do something like this before. Now that the secure cloud environment exists, the three countries are moving out as fast as they can to train their AI tools “to detect adversaries from that data … beyond the level of the human operator to do so.”

For now, the collaboration is focused on P-8s, since foreign military sales cases already exist with the U.K. and Australia to facilitate this collaboration.

Connell, without specifying the nature of the AI tool, said the U.K. would put an application on its P-8s this year to enhance their onboard acoustic performance.

Packer noted the U.S. is independently using this capability on an attack submarine today using U.S.-only vendors and algorithms, but the AUKUS team plans to eventually share the full automatic target recognition tool with all three countries’ planes, submarines and surface combatants once the right authorities are in place.

Sailors assigned to the U.S. Navy submarine Minnesota prepare an MK 48 torpedo at the Haakonsvern Naval Base in Bergen, Norway, in 2019. (Chief MC Travis Simmons/U.S. Navy)

And finally, Packer said the fourth line of effort is looking at the collective inventory of torpedoes and considering how to create more capability and capacity. Both the U.S. and U.K. stopped building torpedoes decades ago, and the U.S. around 2016 began trying to restart its industrial base manufacturing capability.

“The issue is that none of us have sufficient ordnance-, torpedo-building capability,” Packer said, but the group is looking at options to modernize British torpedoes and share in-development American long-range torpedoes with the allies — potentially through an arrangement that involves a multinational industrial base.

Vice Adm. Rob Gaucher, the commander of U.S. Naval Submarine Forces, said during the panel discussion that these AUKUS undersea warfare lines of effort closely match his modernization priorities for his own submarine fleet.

Pursuing these aims as part of a coalition, he said, strengthens all three navies.

“The more we do it, and the faster we do it, and the more we get it in the hands of the operators, the better. And then having three sets of operators to do it makes it even better,” Gaucher said.

Information warfare becoming a critical submarine capability: Aeschbach
https://www.c4isrnet.com/information-warfare/2024/04/10/information-warfare-becoming-a-critical-submarine-capability-aeschbach/
Wed, 10 Apr 2024 12:36:35 +0000

NATIONAL HARBOR, MD. — Demand for information warfare capabilities is growing and will stake a larger presence among the undersea community in the near future, according to one U.S. Navy commander.

“I can’t say it enough. We are in demand, more in demand than we’ve ever been, and that will continue to increase,” Vice Adm. Kelly Aeschbach, leader of Naval Information Forces, said at the Sea-Air-Space conference going on this week in Maryland, just outside the nation’s capital. “My sense is, in the partnership we’ve had, we will have a persistent and growing presence within submarine crews over the coming years.”

The service in 2022 embedded information warfare specialists aboard subs to examine how their expertise aids underwater operations. A follow-up effort is now on the books, with information professional officers and cryptologic technicians joining two East Coast-based subs: the Delaware and the California.

The trials have so far proven fruitful, according to Aeschbach, who is colloquially known as “IBoss.” That said, staffing and other resources need to be considered before any sweeping moves are made.

“It’ll be a slow evolution, I think, as we build out that capacity,” Aeschbach said. “Part of that’s just the reality, as we’ve talked about today, of some of the choices we have to make about investment.”

Del Toro asks Navy contractors to consider taxpayers over shareholders

The Navy is also introducing information warfare systems into its live, virtual and constructive environments. The first few, focused on cryptology, meteorology and oceanography, will be uploaded in the fourth quarter of fiscal 2025, service officials have said. Other disciplines include communications, cryptology and electronic warfare, or the ability to use the electromagnetic spectrum to sense, defend and share data.

Tenets of information warfare — situational awareness, assured command-and-control, and the melding of intel and firepower — have enabled U.S. forces to swat down overhead threats in the Red Sea and Gulf of Aden. They have also assisted retaliatory strikes across the Greater Middle East.

Aeschbach cited as an example the USS Carney’s performance. The guided-missile destroyer and its crew spent the past six months intercepting attack drones and missiles launched by Iran-backed Houthi rebels in Yemen.

“Carney had the means to put the right weapon on the target through kinetic means,” Aeschbach said. “She also had the ability to defend herself. And, in all of that, the only part that I tell people is not information warfare is the operator pressing the button to release the missile.”

]]>
Colin Demarest
<![CDATA[Maxar wins US Army One World Terrain simulation contract]]>https://www.c4isrnet.com/information-warfare/2024/03/12/maxar-wins-us-army-one-world-terrain-simulation-contract/https://www.c4isrnet.com/information-warfare/2024/03/12/maxar-wins-us-army-one-world-terrain-simulation-contract/Tue, 12 Mar 2024 13:38:41 +0000The U.S. Army has again turned to Maxar Intelligence to build out immersive training and simulation software.

The geospatial intelligence company on March 12 said it won the latest phase of prototype work on the service’s One World Terrain, which compiles realistic and, in some cases, extremely accurate digital maps of territory across the globe for military purposes.

A Maxar spokesperson said the company was “not disclosing any financial details” of the deal. The announcement did not say how long the effort was expected to take. Three phases of the OWT prototype project were previously said to be worth nearly $95 million.

OWT is considered a critical component of the Army’s Synthetic Training Environment. The STE is meant to provide soldiers convincing, common and automatically generated environments in which they can study tactics and rehearse missions. Virtual worlds afford troops additional means of training while also saving real-world ammunition, fuel and wear and tear.

A soldier in Tampa, Florida, reviews the Army's One World Terrain software. (Photo provided/U.S. Army)

Maxar’s fourth phase of work will focus on delivering enhanced, believable terrain. The company has been involved with OWT since 2019.

“This latest award reflects the unique value of our 3D geospatial data for military simulation use cases,” Susanne Hake, Maxar’s general manager for U.S. government, said in a statement. “Our data, derived from our Precision3D product, offers an extremely accurate 3D representation of Earth, including real textures and superior accuracy of 3 [meters] in all dimensions.”

Maxar was acquired by private equity firm Advent International in 2022 in an arrangement worth $6.4 billion. Following the buy, the company reorganized into two businesses: Maxar Intelligence and Maxar Space Infrastructure.

]]>
Maxar Intelligence
<![CDATA[NATO navigates fine line between transparency, information security]]>https://www.c4isrnet.com/global/europe/2024/03/08/nato-navigates-fine-line-between-transparency-information-security/https://www.c4isrnet.com/global/europe/2024/03/08/nato-navigates-fine-line-between-transparency-information-security/Fri, 08 Mar 2024 18:53:47 +0000Korzeniewo, POLAND – “We are ready.”

The three-word statement was highlighted in bold letters at the opening of NATO’s March 4 briefing, on the occasion of the Polish leg of the alliance’s largest military exercise since 1988.

But even amid the resolute and calm tone of officials in the room, there was a palpable sense of apprehension among reporters.

A core theme of the speeches presented by NATO representatives revolved around transparency, specifically in showcasing what the Steadfast Defender exercise — and its subsidiary drill Dragon, led by Poland — would involve. Yet many were wary of answering questions related to Russia or lessons learned from the war in Ukraine.

On several occasions, officials were pressed about whether they had concerns over revealing their plans to Russia through events such as these, or even the possibility the Kremlin could intercept operational details.

“Of course we are concerned, everyone is concerned,” Brig. Gen. Gunnar Bruegner, assistant chief of staff at NATO’s Supreme Headquarters Allied Powers Europe, told Defense News. “[We need] to make sure we are safeguarding the critical information, but it does not relieve us from the requirement of making these exercises happen.”

“It is quite a balance you need to keep; you cannot showcase everything,” he said.

During a March 4 news conference, Maj. Gen. Randolph Staudenraus, director of strategy and policy at NATO’s Allied Joint Force Command Brunssum, told reporters that while the alliance does protect its communications, “we are also really trying to be transparent.”

The fine line between accountability and information security is one that some NATO members have recently grappled with. A notable example is the leak of a German discussion about potentially providing Ukraine with Taurus missiles. Russia intercepted audio from the web conference between German Air Force officials.

Through this, Moscow was able to get its hands on information regarding the potential supply of cruise missiles to Ukraine as well as operational scenarios of how the war could play out.

Russian officials said last month that the country views Steadfast Defender as a threat.

When it comes to that training event, Bruegner said, details provided to the media during briefings are meant to illustrate the bigger picture, but only in broad terms.

“The plans themselves and the details in there will not be made available to everyone. What you’re seeing here are slides NATO has unclassified,” he explained.

He also noted that an objective of the exercises is to showcase the integration of capabilities, and not necessarily what NATO would do in a contested setting.

“We for sure would not fly banners on the amphibious devices in a contested exercise, which would have involved having an opponent on the other side of the eastern banks of the river and would’ve looked different [than what we saw in the Dragon drill],” Bruegner said.

]]>
Jackie Faye Burton
<![CDATA[Inside Chinese hacking company’s culture of influence, alcohol and sex]]>https://www.c4isrnet.com/cyber/2024/03/08/inside-chinese-hacking-companys-culture-of-influence-alcohol-and-sex/https://www.c4isrnet.com/cyber/2024/03/08/inside-chinese-hacking-companys-culture-of-influence-alcohol-and-sex/Fri, 08 Mar 2024 11:52:50 +0000(AP) — The hotel was spacious. It was upscale. It had a karaoke bar. The perfect venue, the CEO of the Chinese hacking company thought, to hold a Lunar New Year banquet currying favor with government officials. There was just one drawback, his top deputy said.

“Who goes there?” the deputy wrote. “The girls are so ugly.”

So goes the sordid wheeling and dealing that takes place behind the scenes in China’s hacking industry, as revealed in a highly unusual leak last month of internal documents from a private contractor linked to China’s government and police. China’s hacking industry, the documents reveal, suffers from shady business practices, disgruntlement over pay and work quality, and poor security protocols.

Private hacking contractors are companies that steal data from other countries to sell to the Chinese authorities. Over the past two decades, Chinese state security’s demand for overseas intelligence has soared, giving rise to a vast network of these private hackers-for-hire companies that have infiltrated hundreds of systems outside China.

Though the existence of these hacking contractors is an open secret in China, little was known about how they operate. But the leaked documents from a firm called I-Soon have pulled back the curtain, revealing a seedy, sprawling industry where corners are cut and rules are murky and poorly enforced in the quest to make money.

Leaked chat records show I-Soon executives wooing officials over lavish dinners and late night binge drinking. They collude with competitors to rig bidding for government contracts. They pay thousands of dollars in “introduction fees” to contacts who bring them lucrative projects. I-Soon has not commented on the documents.

Mei Danowski, a cybersecurity analyst who wrote about I-Soon on her blog, Natto Thoughts, said the documents show that China’s hackers for hire work much like any other industry in China.

“It is profit-driven,” Danowski said. “It is subject to China’s business culture — who you know, who you dine and wine with, and who you are friends with.”

Hacking styled as patriotism

China’s hacking industry rose from the country’s early hacker culture, first appearing in the 1990s as citizens bought computers and went online.

I-Soon’s founder and CEO, Wu Haibo, was among them. Wu was a member of China’s first hacktivist group, Green Army — a group known informally as the “Whampoa Academy” after a famed Chinese military school.

Wu and some other hackers distinguished themselves by declaring themselves “red hackers” — patriots who offered their services to the Chinese Communist Party, in contrast to the freewheeling, anarchist and anti-establishment ethos popular among many coders.

In 2010, Wu founded I-Soon in Shanghai. Interviews he gave to Chinese media depict a man determined to bolster his country’s hacking capacity to catch up with rivals. In one 2011 interview, Wu lamented that China still lagged far behind the United States: “There are many technology enthusiasts in China, but there are very few enlightened people.”

With the spread of the internet, China’s hacking-for-hire industry boomed, emphasizing espionage and intellectual property theft.

High-profile hacks by Chinese state agents, including one at the U.S. Office of Personnel Management where personal data on 22 million existing or prospective federal employees was stolen, got so serious that then-President Barack Obama personally complained to Chinese leader Xi Jinping. They agreed in 2015 to cut back on espionage.

For a couple of years, the intrusions subsided. But I-Soon and other private hacking outfits soon grew more active than ever, providing Chinese state security forces cover and deniability. I-Soon is “part of an ecosystem of contractors that has links to the Chinese patriotic hacking scene,” said John Hultquist, chief analyst of Google’s Mandiant cybersecurity unit.

These days, Chinese hackers are a formidable force.

In May 2023, Microsoft disclosed that a Chinese state-sponsored hacking group affiliated with China’s People’s Liberation Army called “Volt Typhoon” was targeting critical infrastructure such as telecommunications and ports in Guam, Hawaii, and elsewhere and could be laying the groundwork for disruption in the event of conflict.

Today, hackers such as those at I-Soon outnumber FBI cybersecurity staff by “at least 50 to one,” FBI Director Christopher Wray said in January at a conference in Munich.

Seedy state-led industry

Though I-Soon boasted about its hacking prowess in slick marketing PowerPoint presentations, the real business took place at hotpot parties, late night drinking sessions and poaching wars with competitors, leaked records show. A picture emerges of a company enmeshed in a seedy, sprawling industry that relies heavily on connections to get things done.

I-Soon leadership discussed buying gifts and which officials liked red wine. They swapped tips on who was a lightweight, and who could handle their liquor.

I-Soon executives paid “introduction fees” for lucrative projects, chat records show, including tens of thousands of RMB (thousands of dollars) to a man who landed them a 285,000 RMB ($40,000) contract with police in Hebei province. To sweeten the deal, I-Soon’s chief operating officer, Chen Cheng, suggested arranging the man a drinking and karaoke session with women.

“He likes to touch girls,” Chen wrote.

It wasn’t just officials they courted. Competitors, too, were targets of wooing over late night drinking sessions. Some were partners — subcontractors or collaborators on government projects. Others were hated rivals who constantly poached their staff. Often, they were both.

One, Chinese cybersecurity giant Qi Anxin, was especially loathed, despite being one of I-Soon’s key investors and business partners.

“Qi Anxin’s HR is a green tea bitch who seduces our young men everywhere and has no morals,” COO Chen wrote to Wu, the CEO, using a Chinese internet slur that refers to innocent-looking but ambitious young women.

I-Soon also has a complicated relationship with Chengdu 404, a competitor charged by the U.S. Department of Justice with hacking more than 100 targets worldwide. They worked with 404 and drank with their executives but lagged on payments to the company and were eventually sued over a software development contract, Chinese court records show.

The source of the I-Soon documents is unclear, and executives and Chinese police are investigating. And though Beijing has repeatedly denied involvement in offensive hacking, the leak illustrates I-Soon and other hacking companies’ deep ties with the Chinese state.

For example, chat records show China’s Ministry of Public Security gave companies access to proofs of concept of so-called “zero days”, the industry term for a previously unknown software security hole. Zero days are prized because they can be exploited until detected. I-Soon company executives debated how to obtain them. They are regularly discovered at an annual Chinese state-sponsored hacking competition.

In other records, executives discussed sponsoring hacking competitions at Chinese universities to scout for new talent.

Many of I-Soon’s clients were police in cities across China, a leaked contract list showed. I-Soon scouted for databases they thought would sell well to officers, such as Vietnamese traffic data to the southwestern province of Yunnan, or data on exiled Tibetans to the Tibetan regional government.

At times, I-Soon hacked on demand. One chat shows two parties discussing a potential “long-term client” interested in data from several government offices related to an unspecified “prime minister.”

A Chinese state body, the Chinese Academy of Sciences, also owns a small stake in I-Soon through a Tibetan investment fund, Chinese corporate records show.

I-Soon proclaimed their patriotism to win new business. Top executives discussed participating in China’s poverty alleviation scheme — one of Chinese leader Xi Jinping’s signature initiatives — to make connections. I-Soon CEO Wu suggested his COO become a member of Chengdu’s People’s Political Consultative Conference, a government advisory body comprised of scientists, entrepreneurs, and other prominent members of society. And in interviews with state media, Wu quoted Mencius, a Chinese philosopher, casting himself as a scholar concerned with China’s national interest.

But despite Wu’s professed patriotism, leaked chat records tell a more complicated story. They depict a competitive man motivated to get rich.

“You can’t be Lei Feng,” Wu wrote in private messages, referring to a long-dead Communist worker held up in propaganda for generations as a paragon of selflessness. “If you don’t make money, being famous is useless.”

Lax security, low pay

China’s booming hackers-for-hire industry has been hit by the country’s recent economic downturn, leading to thin profits, low pay and an exodus of talent, the leaked documents show.

I-Soon lost money and struggled with cash flow issues, falling behind on payments to subcontractors. In the past few years, the pandemic hit China’s economy, causing police to pull back on spending, which hurt I-Soon’s bottom line. “The government has no money,” I-Soon’s COO wrote in 2020.

Staff are often poorly paid. In a salary document dated 2022, most staff on I-Soon’s safety evaluation and software development teams were paid just 5,600 yuan ($915) to 9,000 yuan ($1,267) a month, with only a handful receiving more than that. In the documents, I-Soon officials acknowledged the low pay and worried about the company’s reputation.

Low salaries and pay disparities caused employees to complain, chat records show. Leaked employee lists show most I-Soon staff held a degree from a vocational training school, not an undergraduate degree, suggesting lower levels of education and training. Sales staff reported that clients were dissatisfied with the quality of I-Soon data, making it difficult to collect payments.

I-Soon is a fraction of China’s hacking ecosystem. The country boasts world-class hackers, many employed by the Chinese military and other state institutions. But the company’s troubles reflect broader issues in China’s private hacking industry. The country’s cratering economy, Beijing’s tightening controls and the growing role of the state have led to an exodus of top hacking talent, four cybersecurity analysts and Chinese industry insiders told The Associated Press.

“China is no longer the country we used to know. A lot of highly skilled people have been leaving,” said one industry insider, declining to be named to speak on a sensitive topic. Under Xi, the person added, the growing role of the state in China’s technology industry has emphasized ideology over competence, impeded pay and made access to officials pivotal.

A major issue, people say, is that most Chinese officials lack the technical literacy to verify contractor claims. So hacking companies prioritize currying favor over delivering excellence.

In recent years, Beijing has heavily promoted China’s tech industry and the use of technology in government, part of a broader strategy to facilitate the country’s rise. But much of China’s data and cybersecurity work has been contracted out to smaller subcontractors with novice programmers, leading to poor digital practices and large leaks of data.

Despite the clandestine nature of I-Soon’s work, the company has surprisingly lax security protocols. I-Soon’s offices in Chengdu, for example, have minimal security and are open to the public, despite posters on the walls reminding employees that “to keep the country and the party’s secrets is every citizen’s required duty.” The leaked files show that top I-Soon executives communicated frequently on WeChat, which lacks end-to-end encryption.

The documents do show that staff are screened for political reliability. One metric, for example, shows that I-Soon checks whether staff have any relatives overseas, while another shows that employees are classified according to whether they are members of China’s ruling Communist Party.

Still, Danowski, the cybersecurity analyst, says many standards in China are often “just for show.” But at the end of the day, she added, it may not matter.

“It’s a little sloppy. The tools are not that impressive. But the Ministry of Public Security sees that you get the job done,” she said of I-Soon. “They will hire whoever can get the job done.”

___

Soo reported from Hong Kong. AP Technology Writer Frank Bajak in Boston contributed to this report.

]]>
Dake Kang
<![CDATA[Germany, Russia spar over a leaked audio on missiles for Ukraine]]>https://www.c4isrnet.com/global/europe/2024/03/04/germany-russia-spar-over-a-leaked-audio-on-missiles-for-ukraine/https://www.c4isrnet.com/global/europe/2024/03/04/germany-russia-spar-over-a-leaked-audio-on-missiles-for-ukraine/Mon, 04 Mar 2024 15:19:29 +0000BERLIN (AP) — The German government on Monday vehemently rejected allegations that Russia’s leak of a conversation by high-ranking German military officers was an indication that Berlin was preparing for war against Russia.

At the same time, the government sought to contain the domestic fallout from the leak and promised a quick investigation into how it was possible that a conversation by top German military personnel could be intercepted and published.

“It is absolutely clear that such claims that this conversation would prove, that Germany is preparing a war against Russia, that this is absurdly infamous Russian propaganda,” a spokesman for German Chancellor Olaf Scholz told reporters in Berlin.

Government spokesman Wolfgang Buechner said the leak was part of Russia’s “information war” against the West, and that the aim was to create discord within Germany.

The 38-minute recording features military officers discussing in German how Taurus long-range cruise missiles could be used by Kyiv against invading Russian forces. While German authorities haven’t questioned the authenticity of the recording, Scholz said a week ago that delivering these weapons to Ukraine isn’t an option — and that he doesn’t want Germany to be drawn into the war directly.

Russia’s foreign ministry, however, on Monday threatened Germany with “dire consequences” in connection with the leak, though it didn’t elaborate.

“If nothing is done, and the German people do not stop this, then there will be dire consequences first and foremost for Germany itself,” foreign ministry spokesperson Maria Zakharova said.

Relations between the two countries have continuously deteriorated since Russia invaded Ukraine two years ago.

The audio leak was posted by Margarita Simonyan, chief editor of the Russian state-funded television channel RT, on social media on Friday, the same day that late opposition politician Alexei Navalny was laid to rest after his still-unexplained death two weeks ago in an Arctic penal colony. It also surfaced just weeks before Russia’s presidential election.

In the leaked audio, four officers, including the head of Germany’s Air Force, Ingo Gerhartz, can be heard discussing deployment scenarios for Taurus missiles in Ukraine before a meeting with Defense Minister Boris Pistorius, German news agency dpa reported.

The officers then state that early delivery and rapid deployment of Taurus long-range cruise missiles would only be possible with the participation of German soldiers — and that training Ukrainian soldiers to deploy the Taurus on their own would be possible, but would take months.

The recording also shows the German government hasn’t given its OK for the delivery of the cruise missiles sought by Ukraine, dpa reported.

There has been a debate in Germany on whether to supply the missiles to Ukraine as Kyiv faces battlefield setbacks, and while military aid from the United States is held up in Congress. Germany is now the second-biggest supplier of military aid to Ukraine after the U.S., and is further stepping up its support this year.

Scholz’s insistence last week on not giving Taurus missiles to Ukraine came after Germany stalled for months on Kyiv’s requests for the weapons, which have a range of up to 500 kilometers (310 miles) and could in theory be used against targets far inside Russian territory.

On Monday, the chancellor reiterated his stance during a visit at a school in Sindelfingen in southwestern Germany.

“I’m the chancellor and that’s why it’s valid,” he said regarding his “no” to the delivery of Taurus missiles, dpa reported.

Also on Monday, Germany’s ambassador visited Russia’s foreign ministry in Moscow. While Russian media reported that Ambassador Alexander Graf Lambsdorff had been summoned by the foreign ministry, the German government said his visit had been planned well before the audio was published.

Germany’s defense ministry tried to downplay the significance of the officers’ conversation in the leak — saying it was merely an “exchange of ideas” before a meeting with the defense minister.

The ministry said it was investigating how it was possible that a conversation by top German military personnel could be intercepted and leaked by the Russians. It promised to report its findings. Several German media outlets have reported that the officers were in a WebEx meeting when they were taped.

Buechner, the spokesman of the chancellor, said that the German government would also look into how “we can better counter targeted disinformation, especially from Russia.”

The Kremlin on Monday said that it looked forward to the results of the German government’s investigation.

“Mr. Scholz said that a fast, complete and effective investigation would be carried out. We hope that we will be able to find out the outcome of that investigation,” spokesperson Dmitry Peskov said.

Katie Marie Davies reported from Manchester, United Kingdom.

]]>
<![CDATA[How US Navy leaders see power evolving at ‘dawn of the information age’]]>https://www.c4isrnet.com/information-warfare/2024/03/04/how-us-navy-leaders-see-power-evolving-at-dawn-of-the-information-age/https://www.c4isrnet.com/information-warfare/2024/03/04/how-us-navy-leaders-see-power-evolving-at-dawn-of-the-information-age/Mon, 04 Mar 2024 12:00:00 +0000SAN DIEGO — The sounds of progressive-rock icon Rush crackled through the speakers as the commander of U.S. Pacific Fleet approached the lectern at the West naval conference in California.

Adm. Samuel Paparo, recognizing the tune, smiled as he grabbed the microphone. He then quipped about past lives and “a lot of illegal things” that took place as he came of age in the 1980s.

Times have since changed, and some have found religion in the military, Paparo said. And, much like the times, the demands of a modern-day warrior with a mean, mean stride are constantly evolving.

“We are in the middle of another epochal change,” he said. “And that is the dawn — and I do mean the dawn — of the information age.”

As the U.S. Defense Department prepares for potential confrontations with Russia or China and juggles counterterrorism operations in the Greater Middle East and Africa, it’s emphasizing data: how it’s collected; how it’s shared; and how it can be weaponized. But by some of the department’s own measures, including the 2023 Strategy for Operations in the Information Environment, it is falling behind.

U.S. Navy Adm. Samuel Paparo is the commander of the Pacific fleet. (Colin Demarest/Staff)

State actors and extremist groups alike have long exploited the information ecosystem in an attempt to distort or degrade U.S. standing. Operating online and below the threshold of armed conflict ducks the consequences of physical battles, where personnel levels, accumulated stockpiles and technology budgets can make all the difference.

The Navy in November published a 14-page document laying out how it plans to get up to speed, arguing neither ship nor torpedo alone will strike the decisive blow in future fights. Rather, it stated, a marriage of traditional munitions and exquisite software will win the day.

The thinking was prominent at the AFCEA- and U.S. Naval Institute-hosted West confab, where Paparo and other leaders spoke, and where some of the world’s largest defense contractors mingled and hawked their wares. Signs promised secure connectivity. Other screens advertised warrens of computerized pipes and tubes through which findings could flow.

“Who competes best in this — who adapts better, who’s better able to combine data, computing power and [artificial intelligence], and who can win the first battle, likely in space, cyber and the information domain — shall prevail,” Paparo said.

Subs and simulation

The U.S. has sought to invigorate its approach to information warfare, a persuasive brew of public outreach, offensive and defensive electronic capabilities, and cyber operations that can confer advantages before, during and after major events. Rapidly deployable teams of information forces that can shape public perceptions are a must, the Defense Department has said, as is a healthy workforce comprising military and civilian experts.

The Navy in 2022 embedded information warfare specialists aboard submarines to study how their expertise may aid underwater operations. That pilot program is now advancing into a second phase, with information professional officers and cryptologic technicians joining two East Coast subs, the Delaware and the California.

Years prior, the service made information warfare commanders fixtures of carrier strike groups.

“This is the first and the most decisive battle,” said Paparo, who previously told Congress that Indo-Pacific Command, his future post, is capable of wielding deception to alter attitudes and behaviors. “The information age will not necessarily replace some of the more timeless elements of naval combat, maneuver and fires, but will in fact augment them.”

Del Toro asks Navy contractors to consider taxpayers over shareholders

Tenets of information warfare — situational awareness, assured command and control, and the confluence of intelligence and weaponry — have enabled U.S. forces to bat down overhead threats in the Red Sea and the Gulf of Aden while also assisting retaliatory strikes across the Greater Middle East.

Warships Carney, Gravely, Laboon, Mason and Thomas Hudner have destroyed more than 70 drones and seven cruise missiles since the start of the Israel-Hamas war in October. Prior knowledge regarding the armaments and installations of Iran and the Houthi rebel group in Yemen has made it easier, albeit dangerous.

Information warfare “underpins every single warfare mission in the Navy today,” Elizabeth Nashold, the deputy at Naval Information Forces Command, said at the West conference. “You name that mission, and there’s an IW component to it.”

“USS Carney was ready on Day 1,” she added. “We are in an information age, and we are seeing proliferating technologies, and we have to keep up.”

Ensuring sailors are well-versed in information warfare has proved tricky, as the sensitivity of tools employed clashes with the always-alert eyes and ears of Russia and China. The U.S. Navy has for years wanted to flesh out its simulation and gaming environments to bridge the gap, but has run into both engineering and bureaucratic walls.

“A lot of our IW capabilities are at a higher classification level than what we see in the current live, virtual and constructive operating environment,” Nashold said. “The other challenge is really just getting all the different IW capabilities into LVC.”

The Navy plans to introduce 20 information warfare systems into its live, virtual and constructive environments. The first few — focused on cryptology, meteorology and oceanography — will be uploaded in the fourth quarter of fiscal 2025, according to Nashold. Other information warfare disciplines include communications, cryptology and electronic warfare, or the ability to use the electromagnetic spectrum to sense, defend and communicate.

Photo gallery: scenes from the West 2024 naval conference in San Diego, including Chief of Naval Operations Adm. Lisa Franchetti, Pacific Fleet commander Adm. Samuel Paparo, Navy Secretary Carlos Del Toro and industry exhibits on the show floor. (Colin Demarest/C4ISRNET)

Additionally, the service is eyeing ashore training facilities in locations where sailors and sea power are concentrated: California, Virginia and Japan.

“They’re basically going to be places where sailors can come and actually train to execute those capabilities,” Nashold said. “Once we get into the environment, our IW sailors can run all of our IW capabilities concurrently, and they can actually innovate and iterate and practice over and over again.”

Safe and secure

The Navy’s inaugural cyber strategy, published late last year, underlined the value of virtual weaponry. Non-kinetic effects — capable of wreaking havoc on electronic guts and subsystems — will prove increasingly potent as militaries adopt interlinked databases and units.

In the U.S., the vision of connecting once-disparate forces across land, air, sea, space and cyberspace is known as Combined Joint All-Domain Command and Control. In China, the weaving of command and control, communications, computers, intelligence, surveillance and reconnaissance to quickly coordinate firepower is known as Multi-Domain Precision Warfare.

The former considers the latter a top-tier national security hazard. Defense Department documents describe Beijing as determined to reorient international power in its favor. Such ambitions can take many forms, economic and narrative among them.

“We all know information is combat power, and really the next fight is an information-domain fight just as much as a physical, kinetic fight,” Jane Rathbun, the Navy’s chief information officer, said at West. “We want to make sure that we provide our sailors and Marines with trustworthy, secure information at the time of need.”

Shuttling intel back-and-forth runs the risk of interception or poisoning. Tampering could go undetected, as well, exposing troops to unnecessary risks down the line.

Houthis, Russians wield same Iranian-supplied drones, DIA studies show

“The golden rule in the information superiority vision is the right data, the right place, the right time, securely. And the ‘securely’ piece is critically important,” Rathbun said.

The service’s cyber blueprint identifies critical infrastructure — such as bases, far-flung logistics nodes, and food and water supply chains — as a soft underbelly in need of thicker insulation.

The Five Eyes intelligence-sharing alliance, made up of Australia, Canada, New Zealand, the U.K. and the U.S., warned in May that a Chinese espionage group had slipped past digital defenses in Guam and other locations. Microsoft had detected the breach and attributed it to a group known as Volt Typhoon. A successful cyberattack on infrastructure in Guam or other Indo-Pacific footholds could cripple U.S. military capabilities in the area.

“Volt Typhoon is out there, it’s real,” Scott St. Pierre, who serves as the Navy’s principal cyber adviser, said at West.

“We’re in battle today,” he added. “Information is power.”

]]>
Petty Officer 2nd Class David Ne
<![CDATA[Russian disinformation is on immigration. The real target is Ukraine.]]>https://www.c4isrnet.com/information-warfare/2024/03/01/russian-disinformation-is-on-immigration-the-real-target-is-ukraine/https://www.c4isrnet.com/information-warfare/2024/03/01/russian-disinformation-is-on-immigration-the-real-target-is-ukraine/Fri, 01 Mar 2024 14:04:42 +0000For Vladimir Putin, victory in Ukraine may run through Texas’ Rio Grande Valley.

In recent weeks, Russian state media and online accounts tied to the Kremlin have spread and amplified misleading and incendiary content about U.S. immigration and border security. The campaign seems crafted to stoke outrage and polarization before the 2024 election for the White House, and experts who study Russian disinformation say Americans can expect more to come as Putin looks to weaken support for Ukraine and cut off a vital supply of aid.

In social media posts, online videos and stories on websites, these accounts misstate the impact of immigration, highlight stories about crimes committed by immigrants, and warn of dire consequences if the U.S. doesn’t crack down at its border with Mexico. Many are misleading, filled with cherry-picked data or debunked rumors.

The pivot toward the U.S. comes after two years in which Russia’s vast disinformation apparatus was busy pushing propaganda and disinformation about its invasion of Ukraine. Experts who study how authoritarian states use the internet to spread disinformation say eroding support for Ukraine remains Russia’s top priority — and that the Kremlin is just finding new ways to do it.

“Things have shifted, even in the last few days,” said Kyle Walter, head of research at Logically, a tech company that tracks disinformation campaigns. While experts and government officials have long warned of Russia’s intentions, Walter said the content spotted so far this year “is the first indication that I’ve seen that Russia is actually going to focus on U.S. elections.”

This month Logically identified dozens of pro-Russian accounts posting about immigration in the U.S., with a particular interest in promoting recent anti-immigration rallies in Texas. A recent Logically assessment concluded that after two years spent largely dedicated to the war in Ukraine, Russia’s disinformation apparatus has “started 2024 with a focus on the U.S.”

Many posts highlight crimes allegedly committed by recent immigrants or suggest migrants are a burden on local communities. Some claims were posted by accounts with tiny audiences; others were made by state media sites with millions of followers.

This week the accounts seized on the recent death of a Georgia nursing student and the arrest of a Venezuelan man who had entered the U.S. illegally and was allowed to stay to pursue his immigration case. The killing quickly became a rallying cry for former President Donald Trump and other Republicans who suggest that migrants commit crimes more often than do U.S. citizens. The evidence does not support those claims.

The content, crafted in English, has quickly found its way to websites and platforms popular with American voters. Footage of a recent anti-immigration protest broadcast by Russian outlet RT, for example, was racking up thousands of views this week on X, the platform formerly known as Twitter, and prompting angry replies from other users.

The Russian outlet Sputnik ran a story this week about growing calls to build a U.S.-Mexico border wall, a priority for Trump, who failed to complete the job as president. An analysis of other sites that later linked to the Sputnik piece shows that more than half were in the U.S., according to data from the online analytics firm Semrush.com. Overall, Americans make up the English-language Sputnik’s largest audience.

U.S. officials have warned that Russia could seek to meddle in the elections of dozens of countries in 2024, when more than 50 nations accounting for half of the world’s population are scheduled to hold national votes. While Russia has a strategic interest in the outcome of many of them — the European Parliament, for one — few offer the opportunity and the prize that America does.

For Russia’s bid to conquer Ukraine, this year’s U.S. election stakes couldn’t be higher. President Joe Biden has pledged to fully back Ukraine. Republicans have been far less supportive. Trump has openly praised Putin and the former president has suggested he would encourage Russia to attack America’s NATO allies if they don’t pay their fair share for the military alliance.

More than half of Republicans believe the U.S. is spending too much on Ukraine, according to a recent poll from The Associated Press-NORC Center for Public Affairs Research that found Democrats to be much more supportive of additional aid.

Soon after the war started, Russia mounted a disinformation campaign designed to cut into support for Ukraine. Claims included wild stories about secret U.S. germ warfare labs or Nazi conspiracies or that Ukrainian refugees were committing crimes and taking jobs from people who had welcomed them.

That effort continues, but Russia also has shifted its attention to issues with no obvious tie to Moscow that are more likely to create cracks in the unity of its adversaries — for example immigration or inflation, high-profile topics in the U.S. and Europe.

“They’re very savvy and understand the right buttons to push,” said Bret Schafer, senior fellow and head of the information manipulation team at the Alliance for Securing Democracy, a Washington-based nonprofit. “If your ultimate objective is to reduce support for Ukraine, your inroad might be talking about how bad things are on the southern border. Their path to win this thing is to get the U.S. and the E.U. to stop sending weapons and aid to Ukraine.”

A message left with the Russian Embassy in Washington wasn’t immediately returned.

America’s election may also be a tempting target for other authoritarian nations such as China and Iran that, like Russia, have shown a willingness to use online propaganda and disinformation to further their objectives.

The online landscape has dramatically shifted since Russia sought to meddle in America’s 2016 presidential race won by Trump. Platforms such as Facebook and Instagram have banned many Russian state accounts and built new safeguards aimed at preventing anyone from exploiting their sites. In one recent example, Meta, the owner of Facebook, announced last fall that it had identified and stopped a network of thousands of fake accounts created in China in an apparent effort to fool American voters.

Other platforms, including X, have taken a different approach, rolling back or even eliminating content moderation and rules designed to stop disinformation. Then there is TikTok, whose ties to China and popularity with young people have set off alarms in several state capitals and Washington.

Artificial intelligence is another concern. The technology now makes it easier than ever to create audio or video that is lifelike enough to fool voters.

Social media is no longer the only battleground either. Increasingly, Russia and other disinformation spreaders use encrypted messaging sites or websites that masquerade as legitimate news outlets.

“A lot of their activity has moved off the major platforms to places where they can operate more freely,” said John Hultquist, chief analyst at Mandiant Intelligence, a cybersecurity firm monitoring Russian disinformation.

Walter, Logically’s research director, said he is most concerned about disinformation on X and TikTok this year, given their lack of controls and their popularity, especially with young voters. TikTok’s ties to China have raised national security concerns.

He said that while election years tend to highlight the dangers of disinformation, the most effective information operations are launched years in advance. America’s adversaries have spent a long time studying its politics, building online networks and cultivating domestic divisions.

Now comes the payoff.

“They don’t need to put a ton of effort into causing disinformation,” Walter said. “They’ve already laid the groundwork leading up to 2024.”

]]>
Eric Gay
<![CDATA[Houthi rebels present little threat to undersea cables, US admiral says]]>https://www.c4isrnet.com/information-warfare/2024/02/14/houthi-rebels-present-little-threat-to-undersea-cables-us-admiral-says/https://www.c4isrnet.com/information-warfare/2024/02/14/houthi-rebels-present-little-threat-to-undersea-cables-us-admiral-says/Wed, 14 Feb 2024 21:08:09 +0000SAN DIEGO — Threats by Houthi rebels based in Yemen to slice a lattice of undersea cables in the Red Sea region, jeopardizing communications and financial data, are unrealistic, according to the commander of the Office of U.S. Naval Intelligence.

Veiled threats were published on social media channels associated with the Iran-backed militant group in December and were later amplified by Hezbollah, according to the Middle East Media Research Institute. The posts featured maps of the area, including the Bab-el-Mandeb Strait, where fiber-optic cables run, and chatter about international connectivity.

Asked at the West conference in San Diego if he thought such threats were realistic — if extremists could execute such sabotage — U.S. Navy Rear Adm. Michael Brookes said: “I’m not so sure it is.”

“The Russian Navy remains the greatest threat to the U.S. in the undersea domain,” said Brookes, whose organization collects, analyzes and distributes intelligence about foreign forces. Brookes was named commander in July.

U.S. Navy Rear Adm. Michael Brookes, the Office of Naval Intelligence boss, listens to a reporter's question Feb. 13, 2024, at the West conference in San Diego. (Colin Demarest/C4ISRNET)

The Houthis and other Iranian-armed militias have launched a series of attacks on commercial vessels and U.S. forces across the Greater Middle East in recent weeks, prompting the U.S. to respond with retaliatory strikes in Yemen and Iraq. Houthi forces have used explosives-laden unmanned systems in the air and on the water as well as anti-ship ballistic missiles.

Due to the confined geography of the Red Sea and the speed of anti-ship munitions, there may only be a 60- to 90-second window between Houthi launch and a destroyer’s response in self-defense, Navy Times reported, citing a service leader.

The barrages in the Red Sea have impeded trade in otherwise economically vital waters. Ships are rerouting to avoid the region, resulting in many more nautical miles traversed, additional days spent traveling and extra fuel burned.

The Houthis need not turn to undersea cables to “continue to pose a significant maritime threat or to be able to continue escalating in the maritime domain,” Behnam Ben Taleblu, an Iran specialist at the Foundation for Defense of Democracies think tank in Washington, D.C., said in an interview. “I would expect more anti-ship missile and drone attacks.”

]]>
<![CDATA[Ukraine decodes Russian ‘carrots’ and ‘tea bags’ chatter to save lives]]>https://www.c4isrnet.com/battlefield-tech/2024/02/14/ukraine-decodes-russian-carrots-and-tea-bags-chatter-to-save-lives/https://www.c4isrnet.com/battlefield-tech/2024/02/14/ukraine-decodes-russian-carrots-and-tea-bags-chatter-to-save-lives/Wed, 14 Feb 2024 13:19:09 +0000As the radio crackles with enemy communications that are hard to decipher, one Russian command rings out clear: “Brew five Chinese tea bags on 38 orange.”

A Ukrainian soldier known on the battlefield as Mikhass, who has spent months listening to and analyzing such chatter, is able to quickly decode the gibberish. It means: Prepare five Beijing-made artillery shells and fire them on a specific Ukrainian position in the Serebryansky Forest, which forms the front line in the country’s restive northeast.

Hiding in the basement of an abandoned home 12 kilometers (7 miles) away, Mikhass immediately warns the commander of a unit embedded in that part of the forest, giving him crucial minutes to get his men into trenches, saving their lives.

On the defensive and critically short of ammunition and soldiers after two years of war, Ukrainian forces are increasingly resorting to an age-old tactic — intelligence gleaned from radio intercepts — in a desperate effort to preserve their most vital resources.

The painstaking work is part of a larger effort to beef up and refine electronic warfare capabilities so that soldiers can be warned earlier of impending attacks, while having the battlefield intelligence needed to make their own strikes more deadly. To prevent enemy drone attacks, signal-jamming is also on the rise.

After months of near stalemate along the 1,000 kilometer (621 mile) front line, Ukraine expects fierce attacks in the year ahead from a Russian enemy determined to wear down its defenses to forge a breakthrough. Russian President Vladimir Putin has said there will be no peace until Russia achieves its goals, which include recapturing the entire Donbas region of eastern Ukraine, which it illegally annexed in 2022.

The commander elevated last week to lead Ukraine’s army, Gen. Oleksandr Syrskyi, has highlighted the importance of electronic warfare, and the country’s defense ministry has increased spending on the people and technology behind it.

SAVING LIVES

Russia, which controls about one-fifth of Ukraine, has the advantage of a more developed domestic weapons industry and it uses conscription and coercion to call up troops.

For Ukraine, ammunition shortages have forced brigades to use shells sparingly and only after locating precise targets. Difficulty in mobilizing troops means Ukrainian commanders must be extra protective of soldiers’ lives as they try to fend off ferocious Russian attacks.

It is within this context that better surveillance, eavesdropping and jamming have become more urgent.

Several kilometers south of where Mikhass is positioned, in the Donetsk region town of Konstantinivka, the 93rd Brigade’s Electronic Warfare unit is using jammers to stave off attack drones, the main driver of injuries for soldiers in the region.

The platoon commander is alert, staring at a laptop that shows signals picked up by small antennas planted near the front line. When a Russian Lancet attack drone approaches their area of operation, his screen lights up with activity.

The commander, known on the battlefield as Oleksandr, flips a switch to activate the jammer, which interferes with the drone’s radar; it’s the equivalent of shining a bright light in someone’s eye to disorient them.

“It’s a must,” he says of their operation. “A lot of guys are dying because of drones.”
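The “bright light” analogy can be put into rough numbers with a link-budget sketch. The Python snippet below uses assumed transmit powers, distances and frequency, plus simple free-space propagation, to estimate how much stronger a nearby jammer arrives at a drone’s receiver than the distant emitter it is trying to drown out; the specific figures are illustrative, not drawn from the unit described here.

```python
# Back-of-the-envelope sketch (all numbers are assumptions): why a nearby
# jammer can overpower the signal a drone is trying to receive, assuming
# free-space propagation and unity-gain antennas.
import math

def fspl_db(distance_km: float, freq_mhz: float) -> float:
    """Free-space path loss in dB: 20*log10(d_km) + 20*log10(f_MHz) + 32.44."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

def received_power_dbm(tx_power_dbm: float, distance_km: float, freq_mhz: float) -> float:
    """Received power at the drone, ignoring antenna gains and other losses."""
    return tx_power_dbm - fspl_db(distance_km, freq_mhz)

if __name__ == "__main__":
    FREQ_MHZ = 900.0  # assumed link frequency
    signal = received_power_dbm(30.0, 10.0, FREQ_MHZ)  # 1 W emitter, 10 km from the drone
    jammer = received_power_dbm(40.0, 2.0, FREQ_MHZ)   # 10 W jammer, 2 km from the drone
    print(f"Wanted signal at drone: {signal:.1f} dBm")
    print(f"Jammer at drone:        {jammer:.1f} dBm")
    print(f"Jam-to-signal ratio:    {jammer - signal:.1f} dB")  # roughly 24 dB here
```

Because received power falls off with the square of distance, the closer, higher-power transmitter wins by a wide margin, which is the whole point of parking jammers near the front line.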

Radio operators like Mikhass work in shifts around the clock.

The antennas he relies on to pick up Russian radio signals are camouflaged, jutting out of trees in the forest near Kreminna, close to Russian positions. From a quiet basement command center nearby, Mikhass and other soldiers chain smoke cigarettes and listen through headphones.

A new and sophisticated signal-finding antenna, which resembles a carousel, uses triangulation to locate where the radio waves are emanating from.
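For a rough sense of what that triangulation involves, the sketch below (Python, with invented coordinates and bearings) intersects two bearing lines taken from antennas at known positions to estimate where an emitter sits. Real direction-finding gear has to cope with noisy bearings, moving emitters and multiple simultaneous sources, so treat this as the geometric core of the idea only.

```python
# Minimal sketch of bearing-only triangulation (illustrative numbers):
# two receivers at known positions each measure a bearing to an emitter;
# the emitter is estimated to sit where the two bearing lines intersect.
import math

def bearing_direction(bearing_deg: float):
    """Unit direction for a bearing measured clockwise from north, in (east, north)."""
    rad = math.radians(bearing_deg)
    return (math.sin(rad), math.cos(rad))

def intersect(p1, d1, p2, d2):
    """Intersect rays p1 + t1*d1 and p2 + t2*d2 by solving a 2x2 linear system."""
    det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
    if abs(det) < 1e-9:
        raise ValueError("Bearings are parallel; no unique fix")
    bx, by = p2[0] - p1[0], p2[1] - p1[1]
    t1 = (bx * (-d2[1]) - (-d2[0]) * by) / det
    return (p1[0] + t1 * d1[0], p1[1] + t1 * d1[1])

if __name__ == "__main__":
    # Two antennas 4 km apart along an east-west line, both hearing the same transmitter.
    a_pos, a_dir = (0.0, 0.0), bearing_direction(45.0)   # antenna A: emitter bears 045
    b_pos, b_dir = (4.0, 0.0), bearing_direction(315.0)  # antenna B: emitter bears 315
    east, north = intersect(a_pos, a_dir, b_pos, b_dir)
    print(f"Estimated emitter position: {east:.2f} km east, {north:.2f} km north")  # 2.00, 2.00
```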

They cross-reference what they hear against images they gather from reconnaissance drones and use detailed maps of their enemy’s positions to slowly piece together what it all means.

They are part of a 50-man intelligence unit dubbed the Bunnies of Cherkess — the name inspired by the Chinese military strategist Sun Tzu, who advised warriors to feign weakness when one is strong.

“No one takes bunnies seriously, right?” said Cherkess, the commander of the eponymous unit.

Radio intercepts reveal that the Kremlin is determined to control the entire Serebryansky Forest, which divides Ukraine-controlled Lyman from Russian-occupied Kreminna. It’s part of an effort to reach Torske, a village in Donetsk that is west of Kreminna. From Torske, Russia will be closer to recapturing the nearby hub of Lyman, which would be a devastating setback for Ukraine and disrupt its ability to move supplies to the front.

DECODING ORDERS

Cherkess and his men, most of whom are volunteers who signed up for the infantry, understand the stakes couldn’t be higher, especially as signs grow that support from Western allies is less secure.

After listening to hours and hours of Russian communications each day, much of it related to troop rotations, artillery fire and drone reconnaissance, they gradually build an understanding — with help from specialized computer software — of what it all means.

“Cucumbers” are mortars, “carrots” are grenade launchers — and locations are conveyed in a numerical code with a corresponding color. It took the unit months to decode these Russian orders.
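Mechanically, that decoding amounts to lookups against dictionaries the unit has pieced together over months. The Python sketch below shows the shape of such a translation; the code words come from the reporting above, but the grid entry and function names are invented for illustration.

```python
# Minimal sketch of translating intercepted shorthand into a weapon type and a
# location. Code words are from the reporting; the grid mapping is hypothetical.
WEAPON_CODE = {
    "cucumbers": "mortars",
    "carrots": "grenade launchers",
    "chinese tea bags": "Beijing-made artillery shells",
}

# Hypothetical decoded grid: (number, color) -> a plotted front-line position.
GRID = {
    (38, "orange"): "Ukrainian position in the Serebryansky Forest",
}

def decode(weapon_word: str, number: int, color: str) -> str:
    weapon = WEAPON_CODE.get(weapon_word.lower(), "unknown system")
    location = GRID.get((number, color), "unmapped point (needs more intercepts)")
    return f"{weapon} -> {location}"

if __name__ == "__main__":
    # "Brew five Chinese tea bags on 38 orange"
    print(decode("Chinese tea bags", 38, "orange"))
```

The hard part, as the unit notes, is not the lookup but building and verifying the dictionaries in the first place.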

The arrival of new combat equipment and ammunition — and especially infantrymen — signals a fresh attack is imminent.

“(A soldier) is not interested in what kind of radar Russians have, he needs information on if there will be an attack tonight, and who will come, if they will have tanks, if they have armored vehicles or if it’s just infantry,” said Cherkess.

“And we have to understand how long we have to prepare. A week? Two weeks? A month?”

Advance word of enemy troops being rotated in and out is also useful to Ukrainian soldiers seeking to go on the offensive, he said. That is when they can exact maximum personnel losses.

The previous week, a Russian assault operation was carried out against a neighboring brigade. But the Ukrainian soldiers positioned there were prepared to greet them.

STAYING AHEAD

The importance of electronic surveillance can’t be overstated, said Yaroslav Kalinin, the CEO of Infozahyst, a company under contract with Ukraine’s Defense Ministry.

Before the war, Infozahyst provided anti-wiretapping services for the offices of the president and prime minister. Once the war began, the company pivoted to help the army by manufacturing a versatile signal direction finding system, which is now in high demand.

The government recently doubled its contract with Infozahyst, according to Kalinin.

The buildup of surveillance capabilities is partly a recognition of the need to catch up to the Russians, who invested heavily in this technology long before invading Ukraine.

Kalinin believes that better and smaller devices that are easier to hide and move around will eventually give Ukraine an edge.

The Russians know they are being listened to and routinely try to deceive their enemy with bogus information. It is up to Mikhass and other radio operators to discern the signal from the noise.

“Their artillery helps us,” he explained. “They say where they will shoot, and then we check where the shells landed.”

“38 orange,” the location Mikhass recently heard about for an upcoming attack, is represented on a map by a small dot. And it is surrounded by hundreds of other dots that signify locations they have decoded.

“We need a lot of time to uncover these points,” he said.

And, as Russia steps up the pressure, the clock is ticking.

]]>
Efrem Lukatsky
<![CDATA[Putin urges US to push Ukraine to talks, hints at swap for reporter]]>https://www.c4isrnet.com/federal-oversight/2024/02/09/putin-urges-us-to-push-ukraine-to-talks-hints-at-swap-for-reporter/https://www.c4isrnet.com/federal-oversight/2024/02/09/putin-urges-us-to-push-ukraine-to-talks-hints-at-swap-for-reporter/Fri, 09 Feb 2024 10:58:05 +0000Russian President Vladimir Putin used an interview with former Fox News host Tucker Carlson to urge Washington to recognize Moscow’s interests and persuade Ukraine to sit down for talks.

Putin also said that Russia stands ready to negotiate a potential prisoner exchange that would free Wall Street Journal reporter Evan Gershkovich, who was detained last March on espionage charges he denies, and hinted that Moscow wants the release of its agent imprisoned in Germany.

Most of the interview, released Thursday, focused on Ukraine, where the war is nearing the two-year mark. Putin repeated his claim that his invasion of Ukraine, which Kyiv and its allies described as an unprovoked act of aggression, was necessary to protect Russian speakers in Ukraine and prevent the country from posing a threat to Russia by joining NATO.

In this photo released by Sputnik news agency on Friday, Feb. 9, 2024, Russian President Vladimir Putin, right, gestures as he speaks during an interview with former Fox News host Tucker Carlson at the Kremlin in Moscow, Russia, Tuesday, Feb. 6, 2024. (Gavriil Grigorov, Sputnik, Kremlin Pool Photo via AP)

Putin pointed to Ukrainian President Volodymyr Zelenskyy’s refusal to conduct talks with the Kremlin. He argued that it’s up to Washington to stop supplying Ukraine with weapons and convince Kyiv, which he called a U.S. “satellite,” to sit down for negotiations.

“We have never refused negotiations,” Putin said. “You should tell the current Ukrainian leadership to stop and come to a negotiating table.”

Putin warned that the West will never succeed in inflicting a “strategic defeat” on Russia in Ukraine and rejected allegations that Russia was harboring plans to attack Poland or other NATO countries.

It was Putin’s first interview with a Western media figure since his full-scale invasion of Ukraine two years ago.

White House national security spokesman John Kirby tried to minimize the impact of Carlson’s interview ahead of its release: “Remember, you’re listening to Vladimir Putin. And you shouldn’t take at face value anything he has to say.”

In Russia, the interview received wall-to-wall coverage in state media on Friday morning, with major TV channels repeatedly airing excerpts and one state news agency describing it in a column as “a dagger blow through the curtain of propaganda of the dishonest media of the civilized world.”

In the days leading up to the release of the interview, Russian Kremlin-backed media also extensively covered Carlson’s visit to Russia, trying to follow him around Moscow and reporting in great detail on where the former Fox News host went.

Putin has heavily limited his contact with international media since he launched the war in Ukraine in February 2022. Russian authorities have cracked down on independent media, forcing some Russian outlets to close, blocking others and ordering a number of foreign reporters to leave the country. Two journalists working for U.S. news organizations — The Wall Street Journal’s Gershkovich and Radio Free Europe/Radio Liberty’s Alsu Kurmasheva — are in jail.

Asked by Carlson whether Russia would release Gershkovich, Putin said Moscow is open to talks but repeated that the reporter was charged with espionage, an accusation Gershkovich has denied.

“He was caught red-handed when he was secretly getting classified information,” Putin said of Gershkovich, adding that he doesn’t exclude that the reporter could return home.

“There is no taboo on settling this issue,” Putin said. “We are ready to solve it but there are certain conditions that are being discussed between special services. I believe an agreement can be reached.”

He pointed to a man imprisoned in a “U.S.-allied country” for “liquidating a bandit” who killed Russian soldiers during the fighting in the Caucasus: “He put our soldiers taken prisoners on a road and then drove a car over their heads. There was a patriot who liquidated him in one of the European capitals.”

Putin didn’t mention names, but he appeared to refer to Vadim Krasikov, a Russian serving a life sentence in Germany after being convicted of the brazen 2019 daylight killing of Zelimkhan “Tornike” Khangoshvili, a 40-year-old Georgian citizen of Chechen ethnicity.

German judges who convicted Krasikov said he had acted on the orders of Russian federal authorities, who provided him with a false identity, a fake passport and the resources to carry out the hit.

The Wall Street Journal reaffirmed in a statement that Gershkovich “is a journalist, and journalism is not a crime,” adding that “any portrayal to the contrary is total fiction.” “We’re encouraged to see Russia’s desire for a deal that brings Evan home, and we hope this will lead to his rapid release and return to his family and our newsroom,” it said.

]]>
Gavriil Grigorov
<![CDATA[Russia says ex-Fox News host Tucker Carlson interviewed Vladimir Putin]]>https://www.c4isrnet.com/information-warfare/2024/02/07/russia-says-ex-fox-news-host-tucker-carlson-interviewed-vladimir-putin/https://www.c4isrnet.com/information-warfare/2024/02/07/russia-says-ex-fox-news-host-tucker-carlson-interviewed-vladimir-putin/Wed, 07 Feb 2024 19:17:50 +0000Russian President Vladimir Putin has been interviewed by former Fox News host Tucker Carlson, the Kremlin confirmed Wednesday. It is Putin’s first interview with a Western media figure since his full-scale invasion of Ukraine two years ago.

Carlson had released a video from Moscow on Tuesday in which he said he would be interviewing Putin. Carlson claimed that Western journalists had interviewed Ukrainian President Volodymyr Zelenskyy multiple times but could not be “bothered” to interview the Russian president.

The interview will be posted Thursday, according to Justin Wells, head of programming at Carlson’s streaming network. It’s not known what was said in the interview.

Putin has heavily limited his contact with international media since he launched the war in Ukraine in February 2022. Russian authorities have cracked down on media, forcing some independent Russian outlets to close, blocking others and ordering a number of foreign reporters to leave the country. Two journalists working for U.S. news organizations — The Wall Street Journal’s Evan Gershkovich and Radio Free Europe/Radio Liberty’s Alsu Kurmasheva — are in jail on charges they reject.

Western journalists were invited to Putin’s annual press conference in December — the first since the war began — but only two were given the chance to ask a question.

WHY TUCKER CARLSON?

Putin’s spokesperson, Dmitry Peskov, told reporters that Carlson was chosen for the interview because “he has a position which differs” from other English-language media.

Before his exit from Fox, Carlson repeatedly questioned the validity of U.S. support for Ukraine following the Russian invasion, and he’s wondered why Americans are told to hate Putin so much. His commentaries were frequently circulated on Russian state-run media.

In his video this week, Carlson denounced “corrupt” American media outlets for “fawning pep sessions” with Zelenskyy that he said were designed to get America into a war in eastern Europe and pay for it.

Peskov also rejected Carlson’s suggestion that no Western journalists had submitted requests to interview Putin. He said the Kremlin has received many requests from large Western television channels and newspapers which, he asserted, “take a one-sided position.” Carlson’s position, Peskov said, “is in no way pro-Russian, nor pro-Ukrainian, but rather pro-American.”

The Associated Press is among the outlets that have requested an interview with Putin.

“Does Tucker really think we journalists haven’t been trying to interview President Putin every day since his full scale invasion of Ukraine?” CNN’s Christiane Amanpour said on X. “It’s absurd — we’ll continue to ask for an interview, just as we have for years now.”

Carlson’s trip comes as he has planted himself on former President Donald Trump’s side in a growing split in the Republican party over Putin and the Ukraine war. Trump has pushed to cut off aid to Ukraine, and the GOP majority controlling the House of Representatives has complied.

While many Republicans are critical of the Russian president and his invasion, Trump and others have argued the United States has no interest in helping Ukraine defend itself.

Carlson has also traveled to interview Hungarian Prime Minister Viktor Orban, who has fashioned himself into an icon for conservative populists across the globe and been Putin’s closest ally in the European Union.

HOW THE INTERVIEW WILL BE DISTRIBUTED

The interview with Putin, Carlson said in his video, will be distributed for free on his website and on X, formerly known as Twitter. Carlson, who was fired by Fox News in April, announced he was starting his own streaming service in December.

Fox has offered no explanation for firing Carlson, who was its top-rated personality at the time. Like many people who leave the network that is conservative America’s most popular news outlet, he has struggled to remain in the public eye.

Carlson worked at Fox News for more than a decade and hosted a show where he discussed conspiracy theories about Russia and the Jan. 6 insurrection.

Russian state media on Wednesday extensively covered Carlson’s visit.

Vladimir Solovyev, one of Russia’s most famous television hosts, said the interview would “break through the blockade and the narrative that exists” in Western media which, he said, focuses on “Putin’s unprovoked, brutal invasion of Ukraine.”

Solovyev, whose show is often critical of Western media, said Carlson is “feared” because he does not align with that narrative.

___

Associated Press writer Nicholas Riccardi contributed to this report.

]]>
Alexander Zemlianichenko
<![CDATA[Intel community tools can speed AI adoption]]>https://www.c4isrnet.com/artificial-intelligence/2024/02/02/intel-community-tools-can-speed-ai-adoption/https://www.c4isrnet.com/artificial-intelligence/2024/02/02/intel-community-tools-can-speed-ai-adoption/Fri, 02 Feb 2024 22:01:29 +0000Hamas’ heinous attack against Israel. Houthi missile, drone and maritime attacks on Red Sea shipping. Militant drones striking US troops in Jordan, Iraq, and Syria. CCP brinkmanship in the South China Sea.

While the pace of geo-strategic surprise seemed to quicken last year, the reality is this trend intensified well before Russia’s invasion of Ukraine in 2022. For example, in Africa, where I led US military forces from 2019 to 2022, we responded on the back foot to no fewer than seven coups, attempted coups or unconstitutional changes of government in less than 24 months.

Our lack of recognition and understanding in the run-up to these coups was especially concerning when they sparked violence, as they often did, near U.S. diplomatic and troop locations.

Whether confronting global or regional competitors or resurgent terror, America and our allies must speed adoption of critical technologies for enhanced awareness – like Artificial Intelligence – to surge deterrence when indicators spike and deliver battlefield dominance should crisis come.

Responding to the rush of world events as well as directives within the 2024 NDAA, the Defense Department is rightly deciding where, how, and when to use AI. As it does, senior leaders should look to the Intelligence Community and US special operations forces to identify the trusted commercial AI tools already available for their use. Doing so can save time and more quickly bring the power of AI to more of America’s warfighters.

A recent report by U.K. and U.S. think tanks RUSI and SCSP articulated that “a certain ruthlessness” in realizing a vision is necessary for defense leaders to transform fast enough to meet the demands of the AI age.

This is key, because from the Sahel to Gaza, from Europe to the Indo-Pacific, deteriorating security conditions demand faster adoption of AI if we want to scale our ability to recognize, understand, and decide at the speed of relevance. If DOD does not lead in this area, the consequences will likely be all too grave in the future.

What Hamas told us

Superior intelligence alone does not deliver security; decision makers need intelligence to recognize and understand, but also the ability to decide and act faster than adversaries.

The world witnessed this on Oct. 7 last year: Emerging information suggests intelligence didn’t miss all the indicators, but for reasons yet to be determined, decision-makers lacked the wider context in which all those data points, combined, painted a convincing picture of looming danger. Like viewing a Magic Eye puzzle, AI can provide the right perspective, faster, for decision-makers to better see the context, to recognize, anticipate and interrupt potential disaster in the making.

And for senior leaders who must make highly consequential decisions amidst seemingly endless uncertainty, the more complete the context, the more options available to the leader. AI tools, like some already deployed within the U.S. intelligence and SOF communities, can provide that context for deciders in the broader defense ecosystem. Today, these tools already enhance situational awareness, early warning, and deepen intelligence and context for better informed decision-making.

Though humans still process action through the “Observe, Orient, Decide, Act,” or OODA Loop, AI tools can already vastly accelerate the Observe and Orient phases for decision makers. The next step is to speed the decision and action phases of the cycle. Beyond speeding the decision cycle, AI tools can provide the ability to make it continuous so leaders can decide and act again before the adversary can effectively respond to the first action.

While DOD is working on leveraging AI, the pace of adoption of AI across the services – and the US defense enterprise writ large – is not yet moving at the speed required by the global security environment.

A faster way

There are good efforts already underway to speed this adoption: The Office of the Chief Digital and Artificial Intelligence Officer and Task Force Lima have been established within DOD to identify the best use cases and problem sets where DOD can rapidly apply AI. The 2024 NDAA also mandated pilot AI programs for the Air Force and Navy. Of course, hand in hand with these efforts, work must continue to streamline DOD’s acquisition and certification processes.

This is progress and what’s required now is calibrating these efforts to reduce the coefficient of friction. To go faster, DOD should more widely deploy already existing, trusted AI tools currently in use within the intelligence and SOF communities. Acknowledging the need for security, doing this would deliver the awareness and decision advantages our troops and leaders need far faster.

What do I mean by ‘decision advantage’? Before a leader takes an action, they should already know their adversary’s likely reactions and the options available for counter-action.

Speeding this cycle enables the leader to present the adversary with multiple, compounding dilemmas more rapidly than he can respond. AI is the way to deliver this decision advantage.

There is every reason to do this. As adversaries evade detection in their planning and early execution, seize hostages, or precipitate conflict in already highly destabilized areas, and as nations like Russia, China, and Iran trigger crises in critical global chokepoints, the need for the US and allies to extract context from the cacophony of signals and data is greater than ever.

This is not about replacing our analysts and decision makers, it’s about making them better and far faster. DOD leaders can quickly yet prudently do this by embracing the pre-cleared, trusted AI tools in use by the IC and SOF. In so doing, they can ultimately reduce time to insight for leaders both on the ground and at the highest levels of command, to deter and, if need be, respond.

Stephen J. Townsend served as Commander, U.S. Africa Command from 2019 to 2022. He serves on the Federal Advisory Board of Primer, an AI technology company.

]]>
AP Illustration/Peter Hamlin
<![CDATA[How AI, ML and observability can transform SOCOM information dominance]]>https://www.c4isrnet.com/opinion/2024/01/18/how-ai-ml-and-observability-can-transform-socom-information-dominance/https://www.c4isrnet.com/opinion/2024/01/18/how-ai-ml-and-observability-can-transform-socom-information-dominance/Thu, 18 Jan 2024 21:45:02 +0000The U.S. military must collect data streams from thousands of battlefield vehicles, environmental sensors and other intelligent devices across every military branch and rapidly analyze data to drive split-second decision-making. Nowhere is this truer than for the Special Operations Command.

SOCOM organizes, trains, and equips special operators across Department of Defense combatant commands. It has a workforce of 70,000 personnel and operates in the air, on land, and at sea. Chief Digital and AI Officer Dan Folliard leads the command’s pathfinding data strategy, which centers on using data, analytics and AI to improve business operations and campaign globally.

However, to realize this vision, a crucial foundation is required: a high-performing, reliable and secure IT environment.

The good news is that SOCOM doesn’t need to rearchitect or rebuild its sprawling Special Operations Forces Information Environment, or SIE, from the ground up. Alongside improving battlefield decision-making, artificial intelligence and machine learning can also assist IT leaders in proactively addressing digital performance issues and achieving decision superiority.

Let’s explore key strategies for optimizing SOCOM’s complex, multi-domain SIE and propelling its transition into a data-driven organization.

Complete observability

Decision makers and warfighters need uninterrupted access to data at speed and scale. But guaranteeing secure and seamless performance requires monitoring an exponentially vast number of disparate systems, edge devices, networks, and cloud and on-premises technologies across the SIE.

Instead of piecemeal approaches to infrastructure monitoring, SOCOM should consider implementing network-wide observability. Observability differs from traditional monitoring because it takes a more proactive and holistic approach to network management. Through observability, IT pros can assess the health and security of the entire SIE, including networks, applications, databases, and infrastructure, even hybrid environments.

The U.S. Special Operations Command insignia (Wikimedia Commons).

With full-stack observability, IT teams across units, commands, and business operations can break down data islands, easily understand and visualize the overall picture, and eliminate tool sprawl. They can observe device-to-device communications, properly understand data movement, and detect anomalies. Moreover, observability automates and orchestrates critical monitoring and management tasks – a necessity in vast and complex IT environments.

A key enabler of observability is Artificial Intelligence for IT Operations, or AIOps.

AIOps deploys AI and ML to efficiently synthesize and analyze large volumes of data from across multiple domains — a priority, but also a key challenge for SOCOM. This results in fast and accurate performance analysis and troubleshooting. AIOps also provides actionable intelligence that allows teams to predict potential issues, such as network or application utilization capacity, before they happen and proactively guide remediation actions.

Thanks to the continuous learning capabilities of ML, AIOps also offers deep insights into the root cause of issues and can trigger mitigation workflows, allowing teams to focus on continuous optimization of the IT environment.
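As a rough illustration of the “predict potential issues before they happen” idea, the sketch below projects a network link’s utilization trend against a capacity threshold. The metric names, sample values and 90% threshold are assumptions made for illustration; they are not drawn from any SOCOM system or AIOps product.

# Minimal sketch of trend-based capacity warning of the sort AIOps tooling automates.
# All names, sample values and the 90% threshold are illustrative assumptions.
from statistics import mean

def projected_utilization(samples: list[float], steps_ahead: int) -> float:
    """Project future utilization with a simple linear trend over recent samples."""
    n = len(samples)
    xs = range(n)
    x_bar, y_bar = mean(xs), mean(samples)
    slope = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, samples)) / sum(
        (x - x_bar) ** 2 for x in xs
    )
    return samples[-1] + slope * steps_ahead

def check_link(name: str, samples: list[float], capacity_pct: float = 90.0) -> None:
    """Warn when projected utilization would exceed the capacity threshold."""
    forecast = projected_utilization(samples, steps_ahead=12)
    status = "warn" if forecast >= capacity_pct else "ok"
    print(f"[{status}] {name}: projected {forecast:.1f}% utilization")

# Hypothetical hourly utilization percentages for one link.
check_link("edge-link-01", [52, 55, 59, 63, 68, 72, 77, 81])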

Data as a resource

For observability to be successful, SOCOM must ensure its data-centric performance management approach aligns with DoD’s data governance principles – aka VAULTIS. VAULTIS stands for making data visible, accessible, understandable, linked, trustworthy, interoperable, and secure.

As SOCOM adopts observability, the following guiding principles can help:

— Uphold data governance practices to ensure security boundaries, data privacy, transparency, and controlled access.

— Ensure data fidelity, encompassing accuracy, completeness, consistency, and timely availability throughout the entire IT stack, irrespective of the underlying technologies (a minimal illustration of such a check follows this list).

— Establish relationships between workloads and views into dependencies. This helps provide complete visibility into the network resources underpinning services, thereby reducing blind spots associated with traditional network tools and enhancing the precision and speed of root cause analysis and troubleshooting.
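To make the fidelity principle concrete, here is a bare-bones sketch of a completeness-and-freshness gate applied to a telemetry record before it is trusted. The field names, five-minute freshness window and sample record are invented assumptions, not part of VAULTIS or any DoD system.

# Illustrative data-fidelity gate: reject telemetry records that are incomplete
# or stale. Field names, the freshness window and the sample record are assumptions.
from datetime import datetime, timedelta, timezone

REQUIRED_FIELDS = ("source", "metric", "value", "timestamp")
MAX_AGE = timedelta(minutes=5)

def record_is_trustworthy(record: dict) -> bool:
    """A record passes only if every required field is present and it is fresh."""
    if any(record.get(field) is None for field in REQUIRED_FIELDS):
        return False
    age = datetime.now(timezone.utc) - record["timestamp"]
    return age <= MAX_AGE

sample = {
    "source": "site-a/router-3",
    "metric": "cpu_util_pct",
    "value": 41.7,
    "timestamp": datetime.now(timezone.utc) - timedelta(minutes=2),
}
print(record_is_trustworthy(sample))  # True: complete and fresh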

Hybrid infrastructures

The use cases for AI and ML within SOCOM are limitless, spanning from transforming business operations to improving campaigns globally. But AI and ML, paired with other innovative technologies, also offer SOCOM the unique opportunity to elevate data to the status of a strategic asset and rethink how modern hybrid architectures are managed.

If SOCOM can learn to harness the power of data, observability and AIOps to proactively address SIE performance issues, the command can achieve a multi-domain infrastructure that is interoperable, dependable, transparent, secure, and high performing. This, in turn, frees teams to focus their efforts on developing cutting-edge applications and tools that leverage AI and ML in innovative and pioneering ways.

And it’s a repeatable blueprint (or “gold standard”) that all DoD agencies can follow.

Krishna Sai is responsible for Artificial Intelligence for IT Operations, or AIOps, service management, and database portfolios at SolarWinds, an Austin, Texas-based company that develops software for businesses to help manage their networks, systems and information technology infrastructure.

]]>
Staff Sgt. Clayton Wear
<![CDATA[Communication of intent and the importance of language to deterrence]]>https://www.c4isrnet.com/opinion/2023/12/08/communication-of-intent-and-the-importance-of-language-to-deterrence/https://www.c4isrnet.com/opinion/2023/12/08/communication-of-intent-and-the-importance-of-language-to-deterrence/Fri, 08 Dec 2023 01:32:03 +0000On Nov. 14, Deputy Pentagon Press Secretary Sabrina Singh stated that “deterrence in the Middle East is working.”

But how does one know?

It is a fair question, given that less than a month after Singh’s statement, the Iran-backed Houthis attacked several commercial vessels in the Red Sea and another Iranian proxy, the militias in Iraq, attempted an attack on U.S. troops.

Deterrence is traditionally understood as the demonstration of a capability and the communication of intent to use it.

Although the Iranian regime and its proxies clearly continue to launch attacks on U.S. troops and assets — and at Israel — one can argue that major escalation has, so far, been largely avoided since Hamas attacked Israel on Oct. 7. Escalation, in this case, has been strategically deterred.

Part of that is no doubt because of the impressive capabilities the U.S. has deployed to the region, which include two carrier strike groups, a range of fighter jets, 300 troops, an Ohio-class submarine, as well as consistent ammunition and other support to Israel — all in just over a month.

It has only used force six times — on Oct. 26, Nov. 8, Nov. 12, Nov. 21, Nov. 22, and Dec. 3. All were described as “self-defense” strikes, and the latter two each killed several Iran-connected fighters. Each action was described as a response to a series of attacks on U.S. troops. And while National Security Advisor Jake Sullivan did acknowledge Iran as “the ultimate party responsible” for the Dec. 3 Houthi attack on commercial vessels, his response was to confer with regional allies about a task force to protect commercial vessels — hardly the rhetoric that would put an adversary on notice.

The Iranian regime has clearly made the calculation that a major regional escalation at the moment does not advance its interests. But the continued attacks on U.S. targets are not evidence of the persuasiveness of U.S. rhetoric and actions. There have been at least 76 attacks on U.S. forces over the last two months, which have caused injuries to at least 59 U.S. personnel. Iran-backed Houthis also shot down a U.S. drone.

Can more be done to deter not only escalation, but to stop these persistent attacks? The administration’s rhetoric may be critical here.

President Joe Biden’s administration has variously explained its approach as maintaining solidarity with Israel, preventing the conflict from spreading, operating in self-defense and, of course, achieving deterrence. Explicit statements regarding deterrence of Iran and its proxies have thus far not been included in those explanations.

In his first speech since the Oct. 7 attack, delivered on Oct. 10, Biden explained the rationale for the force posture changes in the region, noting it was “to strengthen deterrence.” Against whom? Biden explained further, “Let me say again — to any country, any organization, anyone thinking of taking advantage of this situation, I have one word: Don’t.”

While it seems obvious that the country in question is Iran and the organizations alluded to are its proxies, the words “any” and “anyone” could apply to Russia, China, ISIS, or any other. Is the U.S. ready to use force against these, and if so, what do these entities “taking advantage of this situation” look like?

The Nov. 8 strike on Iranian facilities was aimed at sending a message that “the United States will defend itself, its personnel, and its interests.”

The language the Biden administration used during the earlier strike, which came on Oct. 26, included the following disclaimer: “These narrowly tailored strikes in self-defense were intended solely to protect and defend U.S. personnel in Iraq and Syria. They are separate and distinct from the ongoing conflict between Israel and Hamas, and do not constitute a shift in our approach to the Israel-Hamas conflict.”

In that “separate and distinct” conflict, the statement continued, the administration urged “all state and non-state entities not to take action that would escalate into a broader regional conflict.”

Iran is not mentioned in the official statements regarding the deployment of either the USS Gerald R. Ford carrier strike group, deployed on Oct. 8, or the USS Dwight D. Eisenhower carrier strike group, which was deployed the following week. Both deployments were explained as measures to support Israel, and only the latter was mentioned as deploying “to deter any state or non-state actor seeking to escalate this war.”

Having focused on demonstrating capability and having used force for “self-defense,” the administration must clarify that deterrence is not only meant to prevent escalation, but to stop attacks on U.S. personnel.

This requires two steps. First, directly identify the Iranian regime as the catalyst for Oct. 7 and the regional escalation since — from Hezbollah’s attacks against Israel to the attacks on U.S. troops. America and Israel are engaged in a single regional conflict that extends far beyond Hamas in Gaza.

Next, the administration must identify to the Iranian regime the scenario that would constitute crossing a threshold — and invite a significant U.S. response — by its actions or those of its proxies.

While deterrence may be working in preventing escalation, the situation can change in an instant, as Oct. 7 demonstrated. Such an event can challenge the administration and expose miscalculation. Indeed, ambiguity often fosters tragic miscalculation.

Clarity of rhetoric can at least serve as a warning to an adversary, who, if nothing else, is certainly listening carefully.

Retired Army Lt. Gen. H. Steven Blum, former deputy commander of U.S. Northern Command and former Chief of the National Guard Bureau, served for 42 years.

Jacob Olidort serves as director of research at the Gemunder Center for Defense & Strategy at the Jewish Institute for National Security of America, or JINSA.

This article is an Op-Ed and as such, the opinions expressed are those of the author.

]]>
Seaman Rylin Paul
<![CDATA[UK says Russia behind cyber espionage aimed at undermining democracy]]>https://www.c4isrnet.com/cyber/2023/12/07/uk-says-russia-behind-cyber-espionage-aimed-at-undermining-democracy/https://www.c4isrnet.com/cyber/2023/12/07/uk-says-russia-behind-cyber-espionage-aimed-at-undermining-democracy/Thu, 07 Dec 2023 14:05:05 +0000Russia’s intelligence services targeted high-profile British politicians, civil servants and journalists with cyberespionage and “malicious cyberactivity” as part of sustained attempts to interfere in U.K. political processes, Britain’s government said Thursday.

The Foreign Office said Russia’s FSB agency was responsible for a range of cyberespionage operations in the U.K., including targeting British parliamentarians from multiple parties from at least 2015 through to this year and selectively leaking and amplifying sensitive information to serve Russian interests.

Foreign Office minister Leo Docherty told lawmakers that a cybergroup known as “Star Blizzard” or Callisto Group, which British cybersecurity officials believe to be “almost certainly subordinate” to an FSB unit, created false identities to impersonate legitimate contacts and then delivered a malicious link to victims.

“They have been targeting high-profile individuals and entities with a clear intent –- using information they obtain to meddle in British politics,” Docherty said.

“The targeting of this group is not limited to politicians but public-facing figures and institutions of all types,” he added. “We have seen impersonation and attempts to compromise email accounts in the public sector, universities, media, NGOs and wider civil society.”

Authorities said the group was responsible for the 2018 hacking of the Institute for Statecraft, a U.K. think tank that worked on defending democracy against disinformation, and the leaking of U.S.-U.K. trade documents ahead of the 2019 British general election.

The goal was to “undermine trust in politics in the U.K. and likeminded states,” the government said.

FSB headquarters in Moscow.

The Foreign Office said that “while some attacks resulted in documents being leaked, attempts to interfere with U.K. politics and democracy have not been successful.”

It said the U.K. on Thursday imposed sanctions on Ruslan Aleksandrovich Peretyatko, an FSB intelligence officer, and Andrey Stanislavovich Korinets, a member of Star Blizzard, for involvement in the so-called spear-phishing operations.

The Russian ambassador to the U.K. was also summoned to express Britain’s concerns over the interference, the Foreign Office said.

“Russia’s attempts to interfere in U.K. politics are completely unacceptable and seek to threaten our democratic processes,” Foreign Secretary David Cameron said in a statement. “Despite their repeated efforts, they have failed.”

Asked whether British officials have uncovered the full extent of the cyber interference, Docherty said he was confident but stressed that officials will continue to be vigilant ahead of U.K. general elections next year.

]]>
Matt Dunham
<![CDATA[Failure to renew spy program would be ‘devastating,’ FBI chief says]]>https://www.c4isrnet.com/federal-oversight/congress/2023/12/05/failure-to-renew-spy-program-would-be-devastating-fbi-chief-says/https://www.c4isrnet.com/federal-oversight/congress/2023/12/05/failure-to-renew-spy-program-would-be-devastating-fbi-chief-says/Tue, 05 Dec 2023 21:38:30 +0000FBI Director Christopher Wray called Tuesday for the reauthorization of a U.S. government surveillance tool set to expire at the end of the year, warning Senate lawmakers that there would be “devastating” consequences for public safety if the program is allowed to lapse.

At issue is Section 702 of the Foreign Intelligence Surveillance Act, which allows the U.S. government to collect without a warrant the communications of targeted foreigners outside the United States. Law enforcement and intelligence officials see the program as vital to combating terror attacks, cyber intrusions, espionage and other foreign threats.

The program, created in the aftermath of the Sept. 11, 2001, attacks, is due to expire at the end of this month unless Congress votes to reauthorize it. But Republican and Democratic lawmakers alike have balked at renewing the program in its current form, recommending a slew of reforms in competing legislative proposals that are aimed at better safeguarding civil liberties and that are jockeying for support in the coming weeks.

The thorny path to reauthorization was laid bare during Tuesday’s hearing, when lawmakers from both parties questioned Wray, at times aggressively, over periodic misuse of the program by FBI employees seeking out information about Americans. Though the program enables surveillance only of foreigners located outside the U.S., it also can capture the communications of American citizens and others in the U.S. when those people are in contact with those targeted foreigners.

Sen. Mike Lee, a Utah Republican, said that during his 13 years on the committee, he’d pressed multiple FBI directors about civil liberties violations associated with the surveillance program and had repeatedly been given false reassurances about the reforms being put in place.

“Every darn one of them has told me the same thing: ‘Don’t worry about it, we’ve got this taken care of it, we’ve got new procedures, it’s going to be different now’,” Lee said. “It’s never different. You haven’t changed.”

He added: “We have no reason to trust you because you haven’t behaved in a manner that is trustworthy.”

‘Whole other level’

The fact that Wray devoted a significant portion of his prepared remarks to the Senate Judiciary Committee to the issue underscores its importance to the FBI, particularly at a time when the Israel-Hamas war has drawn heightened concern about the possibility of extremist violence on U.S. soil and contributed to threats being at a “whole other level” since the Oct. 7 attacks.

“I think blinding us through either allowing 702 to lapse or amending it in a way that guts its effectiveness would be reckless at best and dangerous and irresponsible at worst,” Wray said.

Calling the authority indispensable, he told the committee, “702 allows us to stay a step ahead of foreign actors located outside the United States who pose a threat to national security. And the expiration of our 702 authorities would be devastating to the FBI’s ability to protect Americans from those threats.”

Wray, who took over as director in 2017, said that what made the current climate unique is that “so many of the threats are all elevated at the same time.”

But the 702 program has come under scrutiny in the last year following revelations that FBI analysts improperly searched the database of intelligence, including for information about people tied to the Jan. 6, 2021, riot at the U.S. Capitol and the racial justice protests of 2020.

Sen. Ted Cruz, R-Texas, center, flanked by Sen. Josh Hawley, R-Mo., left, and Sen. Mike Lee, R-Utah, right, questions FBI Director Christopher Wray during a Senate Judiciary Committee oversight hearing on Capitol Hill in Washington, Tuesday, Dec. 5, 2023. (AP Photo/Susan Walsh)

Those concerns have united longtime vocal champions of civil liberties, including Democratic Sen. Ron Wyden of Oregon, as well as Republican supporters of former President Donald Trump who are still angry over surveillance missteps made during the Russia investigation of 2016.

Wray and the White House have balked at the idea that the FBI should be required to get a warrant before searching the intelligence database for information about Americans and others inside the United States. The FBI director said such a requirement would be legally unnecessary and would hold up the bureau in trying to disrupt fast-moving national security threats.

If a warrant requirement is the path chosen, Wray said, “What if there were a terrorist attack that we had a shot to prevent, but couldn’t take it, because the FBI was deprived of the ability under 702 to look at key information already sitting in our holdings?”

One bipartisan proposal from Republican Sen. Marco Rubio and Democratic Sen. Mark Warner, the two leaders of the Senate intelligence committee, would impose a more limited requirement, mandating a court order for when the FBI wants to conduct queries on Americans to find evidence of a crime but not for when the FBI is searching the intelligence database for foreign intelligence information.

Sen. Chris Coons, a Delaware Democrat who said he had not yet made up his mind on how he’d vote, asked Wray whether that more narrow proposal might be more acceptable. The FBI director said that proposal might provide a viable path forward, but noted that searches of the database for the sole purpose of finding evidence of a crime were very rare.

Illinois Sen. Dick Durbin, the Democratic chairman of the Senate Judiciary Committee, summed up the issue by telling Wray that though “there was no question” that Section 702 was a “critical tool for collecting foreign intelligence” he would encourage significant reforms to protect the privacy of “innocent Americans.”

]]>
Susan Walsh
<![CDATA[Pentagon seeks to rapidly build up information-warfare force]]>https://www.c4isrnet.com/information-warfare/2023/11/21/pentagon-seeks-to-rapidly-build-up-information-warfare-force/https://www.c4isrnet.com/information-warfare/2023/11/21/pentagon-seeks-to-rapidly-build-up-information-warfare-force/Tue, 21 Nov 2023 18:40:43 +0000WASHINGTON — The U.S. military lacks the ability to quickly deploy personnel that can fend off malevolent actors trying to shape public opinion and must act now to build up such “information forces,” according to a newly revealed Pentagon strategy.

Conquering the information ecosystem from social-media chatter to government propaganda is increasingly important as mis- and disinformation proliferate and world powers including China and Russia try to influence foreign affairs from afar.

The Defense Department must “build a process to rapidly deploy teams of information forces, including the reserve force,” and foster a related workforce comprising military and civilian experts, according to the 2023 Strategy for Operations in the Information Environment. Improved recruiting, training and career paths are needed for the effort.

The congressionally mandated document was made public Nov. 17, months after internal publication. Information warfare represents a persuasive brew of public outreach, offensive and defensive electronic capabilities, and cyber operations; it combines data awareness and manipulation to gain an advantage before, during and after major events.

“As this strategy makes clear, our ability to gain and sustain information advantages at the times and places of our choosing are critical to successful operations in the information space,” Defense Secretary Lloyd Austin wrote in the introduction. “Make no mistake: America’s competitors and enemies are moving quickly in the information environment, hoping to offset our enduring strategic advantages elsewhere.”

Emerging technologies play a critical role in both waging and foiling influence campaigns, according to the strategy.

Generative artificial intelligence, capable of imitating human interaction, can fuel spam, phishing attempts and impersonation, while automation can flood Facebook or X, formerly Twitter, with misleading and provoking posts. U.S. officials have repeatedly warned of the former. The latter has been seen in the Russia-Ukraine and Israel-Hamas wars.

Air Force Lt. Gen. Kevin Kennedy, the commander of the information warfare-focused 16th Air Force, said the discipline dominates international competition and shapes the fighting that can follow.

“It’s the essential enabler, as we’re looking to gain information and decision advantage,” Kennedy said at a Mitchell Institute event last week. “We’re employing forces across domains to make sure that we’re ready to seize the initiative in the information domain, and, through our information warfare capabilities, to prevail in conflict and set the conditions for peace, after the conflict, as we’re moving forward.”

]]>
Gary Nichols
<![CDATA[Pentagon AI boss seeks ‘justified confidence’ to take tech into battle]]>https://www.c4isrnet.com/artificial-intelligence/2023/11/20/pentagon-ai-boss-seeks-justified-confidence-to-take-tech-into-battle/https://www.c4isrnet.com/artificial-intelligence/2023/11/20/pentagon-ai-boss-seeks-justified-confidence-to-take-tech-into-battle/Mon, 20 Nov 2023 15:51:45 +0000The Pentagon’s chief digital and artificial intelligence officer, Craig Martell, said he is alarmed by the potential for generative artificial intelligence systems like ChatGPT to deceive and sow disinformation. His talk on the technology at the DefCon hacker convention in August was a huge hit. But he’s anything but sour on reliable AI.

Not a soldier but a data scientist, Martell headed machine-learning at companies including LinkedIn, Dropbox and Lyft before taking the job last year.

Marshalling the U.S. military’s data and determining what AI is trustworthy enough to take into battle is a big challenge in an increasingly unstable world where multiple countries are racing to develop lethal autonomous weapons.

The interview has been edited for length and clarity.

—-

Q: What is your main mission?

A: Our job is to scale decision advantage from the boardroom to the battlefield. I don’t see it as our job to tackle a few particular missions but rather to develop the tools, processes, infrastructure and policies that allow the department as a whole to scale.

Q: So the goal is global information dominance? What do you need to succeed?

A: We are finally getting at network-centric warfare -- how to get the right data to the right place at the right time. There is a hierarchy of needs: quality data at the bottom, analytics and metrics in the middle, AI at the top. For this to work, most important is high-quality data.

Q: How should we think about AI use in military applications?

A: All AI is, really, is counting the past to predict the future. I don’t actually think the modern wave of AI is any different.

China, Ukraine

Q: Is China winning the AI arms race?

A: I find that metaphor somewhat flawed. When we had a nuclear arms race it was with a monolithic technology. AI is not that. Nor is it a Pandora’s box. It’s a set of technologies we apply on a case-by-case basis, verifying empirically whether it’s effective or not.

Q: The U.S. military is using AI tech to assist Ukraine. How are you helping?

A: Our team is not involved with Ukraine other than to help build a database for how allies provide assistance. It’s called Skyblue. We’re just helping make sure that stays organized.

Q: There is much discussion about autonomous lethal weaponry – like attack drones. The consensus is humans will ultimately be reduced to a supervisory role — being able to abort missions but mostly not interfering. Sound right?

A: In the military we train with a technology until we develop a justified confidence. We understand the limits of a system, know when it works and when it might not. How does this map to autonomous systems? Take my car. I trust the adaptive cruise control on it. The technology that is supposed to keep it from changing lanes, on the other hand, is terrible. So I don’t have justified confidence in that system and don’t use it. Extrapolate that to the military.

‘Loyal wingman’

Q: The Air Force’s “loyal wingman” program in development would have drones fly in tandem with fighter jets flown by humans. Is the computer vision good enough to distinguish friend from foe?

A: Computer vision has made amazing strides in the past 10 years. Whether it’s useful in a particular situation is an empirical question. We need to determine the precision we are willing to accept for the use case and build against that criteria – and test. So we can’t generalize. I would really like us to stop talking about the technology as a monolith and talk instead about the capabilities we want.
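As a bare-bones illustration of building against a precision criterion and testing, the sketch below compares a classifier’s measured precision on a labeled evaluation set with an acceptance threshold chosen for the use case. The labels, predictions and 0.95 threshold are invented for illustration and are not DoD evaluation criteria.

# Minimal sketch: test a friend-or-foe classifier's precision against an
# acceptance threshold chosen for the use case. All data and the threshold
# are illustrative assumptions.
def precision(predictions: list[str], labels: list[str], positive: str = "friend") -> float:
    """Fraction of predicted positives that are actually positive."""
    predicted_pos = [(p, l) for p, l in zip(predictions, labels) if p == positive]
    if not predicted_pos:
        return 0.0
    true_pos = sum(1 for p, l in predicted_pos if l == positive)
    return true_pos / len(predicted_pos)

ACCEPTANCE_THRESHOLD = 0.95  # set per use case, as described above

labels      = ["friend", "foe", "friend", "friend", "foe", "friend"]
predictions = ["friend", "friend", "friend", "friend", "foe", "friend"]

p = precision(predictions, labels)
print(f"precision={p:.2f}, accepted={p >= ACCEPTANCE_THRESHOLD}")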

Q: You are currently studying generative AI and large-language models. When might it be used in the Department of Defense?

A: The commercial large-language models are definitely not constrained to tell the truth, so I am skeptical. That said, through Task Force Lima (launched in August) we are studying more than 160 use cases. We want to decide what is low risk and safe. I’m not setting official policy here, but let’s hypothesize.

Low-risk could be something like generating first drafts in writing or computer code. In such cases, humans are going to edit, or in the case of software, compile. It could also potentially work for information retrieval — where facts can be validated to ensure they are correct.

Q: A big challenge with AI is hiring and retaining the talent needed to test and evaluate systems and label data. AI data scientists earn a lot more than what the Pentagon has traditionally paid. How big a problem is this?

A: That’s a huge can of worms. We have just created a digital talent management office and are thinking hard about how to fill a whole new set of job roles. For example, do we really need to be hiring people who are looking to stay at the Department of Defense for 20-30 years? Probably not.

But what if we can get them for three or four? What if we paid for their college and they pay us back with three or four years and then go off with that experience and get hired by Silicon Valley? We’re thinking creatively like this. Could we, for example, be part of a diversity pipeline? Recruit at HBCUs (historically Black colleges and universities)?

]]>
Colin Demarest
<![CDATA[Navigating the Pentagon’s social media rules amid the Israel-Hamas war]]>https://www.c4isrnet.com/off-duty/military-culture/2023/11/16/navigating-the-pentagons-social-media-rules-amid-the-israel-hamas-war/https://www.c4isrnet.com/off-duty/military-culture/2023/11/16/navigating-the-pentagons-social-media-rules-amid-the-israel-hamas-war/Thu, 16 Nov 2023 20:03:49 +0000It started when Russia invaded Ukraine in February 2022. Social media posts about the conflict traveled faster than journalists could fact check, as manufactured myths like the Ghost of Kyiv, a flying ace credited with downing six Russian planes, were quickly accepted as truth.

Eventually, the anonymous pilot was proven to be a myth, but the legend was nevertheless morale-boosting for those on the ground. And while this case of propaganda may have offered a valuable service to Ukrainians, there remains no shortage of false information muddying truth on social media today — notably, information pertaining to Israel’s war with Hamas.

Disinformation, misinformation, and deep fakes can be dangerous for active duty service members who use social media. Still, the Defense Department has not updated its social media policies or put out additional guidance in light of the recent conflict in the Middle East.

Currently, Pentagon policy holds that “DoD civilians and service members must remember their duty to protect DoD information and prevent the unauthorized disclosure of non-public and classified information. Additionally they should be mindful of the power of connected media and refrain from making statements that may appear to represent an official position by the DoD, their Command, or office,” according to DoD spokesperson Navy Cmdr. Nicole Schwegman.

While some might argue having opinions about foreign wars is different than engaging in, say, political rhetoric, for U.S. service members, there is a fine line, according to William Marcellino, senior behavioral scientist at the RAND Corporation. And in the case of the Israel-Hamas conflict, there are U.S. troops involved and decisions being made that could impact them.

“It’s not simply a foreign issue, your fellow brothers and sisters in arms — especially in the Marine Corps and the Navy and other SOF elements out there — they’re in harm’s way,” Marcellino said. “They’re conducting U.S. foreign policy and defense operations there. So, if you’re trying to chime in and spread information, misinformation, disinformation, to do political advocacy work, that touches on U.S. concerns.”

Additionally, while operational security, or OPSEC, remains a high priority, the lines about what service members can and cannot post have become more blurred of late. And it’s increasingly difficult to police the social media activity of troops, Caitlin Chin-Rothmann, a fellow with the Strategic Technologies Program at the Center for Strategic and International Studies, told Military Times.

“It is generally difficult for the military to monitor and enforce social media policies across its large civilian and active duty workforce,” Chin-Rothmann said, “and this challenge will only grow in the future.”

Service members must avoid pushing a political agenda, Pentagon guidance notes, even on personal accounts. Whether those rules are being strictly enforced, however, is another matter entirely.

It has become increasingly common in recent years, for instance, for uniformed active duty service members to share political opinions and boost information about international conflicts through social media.

“Misinformation and disinformation presents numerous challenges for the DoD and troops,” Chin-Rothmann added. “Some online conspiracy theories promote extremism or violence, stoke distrust in public health policies, and otherwise seek to undermine democratic societies. Because many DoD employees and active duty service members hold access to sensitive information or other powerful assets, they may be susceptible to outsized harms.”

Current DoD policy defines political activity as one “directed toward the success or failure of a political party, candidate for partisan political office or partisan political group.”

“As we’ve seen with the Air Guardsmen revealing classified documents on Discord, if you’re a digital native, who has been on social media your whole life, that message may not ring,” Marcellino said.

]]>
Noah Berger