In Thai Election, New ‘War Room’ Polices Social Media

In Thailand’s election “war room,” authorities scroll through thousands of social media posts, looking for violations of laws restricting political parties’ campaigning on social media, rules that activists say are among the most prohibitive in the world.

The monitors are on the lookout for posts that “spread lies, slander candidates, or use rude language,” all violations of the new electoral law, said Sawang Boonmee, deputy secretary-general of the Election Commission, who gave a Reuters team an exclusive tour of the facility.

When they find an offending post, on, for example, Facebook, they print it out, date-stamp it, and file it in a clear plastic folder, to be handed over to the Election Commission and submitted to Facebook for removal.

“When we order content to be removed, we’ll reach out to the platforms, and they are happy to cooperate with us and make these orders efficient,” Sawang said.

Sawang said the tough electoral laws governing social media for the March 24 election, the first since a 2014 military coup, are a necessary innovation aimed at preventing manipulation that has plagued other countries’ elections in recent years.

“Other countries don’t do this. Thailand is ahead of the curve with regulating social media to ensure orderly campaigning and to protect candidates,” he said.

A Facebook representative said it reviewed requests from governments on a case-by-case basis.

“We have a government request process, which is no different in Thailand than the rest of the world,” the representative said.

Twitter did not respond to a request for comment.

Democracy advocates worry that the social media restrictions laid out by the military government may be impeding parties from campaigning freely.

The rules require candidates and parties to register their social media handles and submit each post to the commission, stating which platform it will appear on and for how long.

Parties and candidates are only allowed to discuss policies, and posts judged to mislead voters or portray others negatively could see a party disqualified, or a candidate jailed for up to 10 years and banned from politics for 20.

Pongsak Chan-on, coordinator of the Bangkok-based Asia Network for Free and Fair Election (ANFREL), said the rules go far beyond combating “fake news” and raise questions about how free and fair the election will be.

“The rules are stricter than in any recent elections anywhere. They’re so detailed and strict that parties are obstructed,” he told Reuters.

‘Doesn’t Bode Well for Democracy’

The monitoring center, with a signboard reading “E-War Room,” has three rows of computers and stacks of printouts, with half a dozen workers spending eight hours a day searching for violations of the law.

Sawang said another intelligence center scanned for violations 24 hours a day but it was “off-limits” to media.

The election is broadly seen as a race between the military-backed prime minister, Prayuth Chan-ocha, and parties that want the military out of politics.

But the stringent rules have left anti-junta parties fretting about how to campaign online, nervous that they could inadvertently break a rule that triggers disqualification.

Up to now, the new rules have not been used to disqualify any candidates, though the threat alone has had a dampening effect and encouraged self-censorship.

“They create complications for parties,” said Pannika Wanich, spokeswoman for the new Future Forward Party, which has attracted support among young, urban Thais who have come of age on social media.

She said her party had to consult a legal team before making posts.

Some candidates have deactivated their Facebook pages while others have removed posts that might cause trouble.

Last month, Future Forward leader Thanathorn Juangroongruangkit faced disqualification over an allegation that he misled voters in his biography on the party’s website. The commission dismissed the case last week.

In another petition, the commission was asked to ban the party’s secretary-general for slandering the junta in a Facebook post.

“It’s very restrictive and doesn’t bode well for democracy,” said Tom Villarin, a Philippine congressman and member of ASEAN Parliamentarians for Human Rights (APHR). “Putting more restrictions on social media during a campaign season defeats the purpose of holding elections in the first place.”

Fighting Fake News

About 74 percent of Thailand’s population of 69 million are active social media users, putting Thais among the world’s top 10 users, according to a 2018 survey by Hootsuite and We Are Social.

Thailand is Facebook’s eighth biggest market with 51 million users, the survey showed.

Facebook said it has teams of Thai-language speakers monitoring posts, and that it restricts electoral advertisements originating from outside the country.

“Combating false news is crucial to the integrity and safety of the Thailand elections,” said Katie Harbath, Facebook’s Global Politics and Government director, during a Bangkok visit in January.

Sawang said the election commission has also gained cooperation from Twitter and Japanese messaging app Line, used by 45 million Thais.

Line Thailand told Reuters it did not monitor chats for the election commission but helped limit fake news by showing only articles from “trusted publishers” on its news feature.

Tech Consortium Flags More Than 800 Versions of New Zealand Attack Video

A consortium of global technology firms has shared on its collective database the digital fingerprints of more than 800 versions of the video of New Zealand’s mass shootings that killed 50 people, it said on Monday.

While it was not the first internet broadcast of a violent crime, the livestream of the massacre showed that stopping gory footage from spreading online remains a major challenge for tech companies, despite years of investment.

Last Friday, social media users intent on sharing the mosque shooting video were said to have used several methods to create new versions with digital fingerprints different from the original’s, so as to evade companies’ detection systems.
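Such evasions work because byte-exact fingerprints are brittle: changing even a single byte of a file yields a completely different hash, so a re-encoded or cropped copy no longer matches a database entry. A minimal Python sketch of exact-match fingerprinting (illustrative only; the names and sample bytes here are hypothetical, and industry systems rely on perceptual hashes designed to survive such edits):

```python
import hashlib

def exact_fingerprint(data: bytes) -> str:
    # Hash the raw bytes; any change to the file changes the digest.
    return hashlib.sha256(data).hexdigest()

clip = b"stand-in for the original video bytes"
recut = clip + b"\x00"  # simulates a re-encoded, cropped, or padded copy

print(exact_fingerprint(clip))
print(exact_fingerprint(recut))
print(exact_fingerprint(clip) == exact_fingerprint(recut))  # False
```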

“This incident highlights the importance of industry cooperation regarding the range of terrorists and violent extremists operating online,” the grouping, which includes Facebook Inc, Alphabet Inc’s Google and Twitter Inc, said of the attack.

Facebook, the world’s largest social media network with about 2.3 billion monthly users, said the original video on its service, a live broadcast of a gunman firing in and around a mosque, was seen fewer than 200 times.

An archived copy drew about 3,800 additional views on Facebook before the company removed it, Facebook said in a blog post on Monday, but a user on online forum 8chan had already copied the video and posted a link on a file-sharing service.

“Before we were alerted to the video, a user on 8chan posted a link to a copy of the video on a file-sharing site,” it added.

No users had filed complaints with Facebook about offensive content in the livestream, it said, adding that its first user complaint came over the archived copy, 12 minutes after the 17-minute broadcast ended.

Administrators for 8chan did not immediately respond to an email seeking comment, but have said previously they are cooperating with law enforcement.

The gunman initiated the livestream using an app designed for extreme sports enthusiasts, with copies still being shared on social media hours later.

Late on Saturday, Facebook said it had removed 1.5 million videos in the 24 hours after the Christchurch attack.

New Zealand Prime Minister Jacinda Ardern has said she wants to discuss live streaming with Facebook, and some of the country’s firms are considering whether to pull advertising from social media.

The Global Internet Forum to Counter Terrorism (GIFCT) was created in 2017 under pressure from governments in Europe and the United States after a spate of deadly attacks.

It shares technical solutions for the removal of terrorist content, commissions research to assist its efforts to fight such content, and works with counter-terrorism experts.

S. Korea Alert System Warns ‘Smartphone Zombies’ of Traffic

A city in South Korea, the country with the world’s highest smartphone penetration rate, has installed flickering lights and laser beams at a road crossing to warn “smartphone zombies” to look up and drivers to slow down, in the hope of preventing accidents.

The designers of the system were prompted by growing worry that more pedestrians glued to their phones will become casualties in a country that already has some of the highest road fatality and injury rates among developed countries.

State-run Korea Institute of Civil Engineering and Building Technology (KICT) believes its system of flickering lights at zebra crossings can warn both pedestrians and drivers.

In addition to red, yellow and blue LED lights on the pavement, “smombies” – smartphone zombies – will be warned by a laser beam projected from power poles and by an app alert sent to their phones, telling them they are about to step into traffic.

“An increasing number of smombie accidents have occurred at pedestrian crossings, so these zombie lights are essential to preventing such accidents,” said KICT senior researcher Kim Jong-hoon.

The multi-dimensional warning system is operated by radar sensors and thermal cameras and comes with a price tag of 15 million won ($13,250) per crossing.

Drivers are alerted by the flashing lights, which proved effective 83.4 percent of the time in the institute’s tests involving about 1,000 vehicles.

In 2017, more than 1,600 pedestrians were killed in auto-related accidents, about 40 percent of total traffic fatalities, according to data from the Traffic Accident Analysis System.

South Korea has the world’s highest smartphone penetration rate, according to Pew Research Center, with about 94 percent of adults owning the devices in 2017, compared with 77 percent in the United States and 59 percent in Japan.

For now, the smombie warning system is installed only in Ilsan, a suburban city about 30 km northwest of the capital, Seoul, but is expected to go nationwide, according to the institute.

Kim Dan-hee, a 23-year-old resident of Ilsan, welcomed the system, saying she was often too engrossed in her phone to remember to look at traffic.

“This flickering light makes me feel safe as it makes me look around again, and I hope that we can have more of these in town,” she said.

Facebook Says Service Hindered by Lack of Local News

Facebook’s effort to establish a service that provides its users with local news and information is being hindered by the lack of outlets where the company’s technicians can find original reporting.

The service, launched last year, is currently available in some 400 cities in the United States. But the social media giant said it has found that 40 percent of Americans live in places where there are not enough local news stories to support it.

Facebook announced Monday it would share its research with academics at Duke, Harvard, Minnesota and North Carolina who are studying the extent of news deserts created by newspaper closures and staff downsizing.

Some 1,800 newspapers have closed in the United States over the last 15 years, according to the University of North Carolina. Newsroom employment has declined by 45 percent as the industry struggles with a broken business model partly caused by the success of companies on the Internet, including Facebook.

The Facebook service, called “Today In,” collects news stories from various local outlets, along with government and community groups. The company deems a community unsuitable for “Today In” if it cannot find a single day in a month with at least five news items available to share.
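That eligibility threshold is simple to state in code. A minimal sketch, assuming the input is the dates of a month’s shareable news items (the function name and data are hypothetical; Facebook has not published its actual criteria):

```python
from collections import Counter
from datetime import date

def eligible_for_today_in(item_dates: list[date]) -> bool:
    # Qualifies if at least one day had five or more news items to share.
    items_per_day = Counter(item_dates)
    return any(count >= 5 for count in items_per_day.values())

# Four items on one day and one on another is not enough.
sample = [date(2019, 3, 4)] * 4 + [date(2019, 3, 5)]
print(eligible_for_today_in(sample))  # False
```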

The geographical disparity is not wide. The share of news deserts is somewhat higher in the Northeast and Midwest, at 43 percent, Facebook said; in the South and West, the figure is 38 percent.

“It affirms the fact that we have a real lack of original local reporting,” said Penelope Muse Abernathy, a University of North Carolina professor who studies the topic. She said she hopes the data helps pinpoint areas where the need is greatest, eventually leading to some ideas for solutions.

Facebook doesn’t necessarily have the answers. “Everyone can learn from working together,” said Anne Kornblut, director of news initiatives at the company.

The company plans to award some 100 grants, ranging from $5,000 to $25,000, to people with ideas for making more news available, said Josh Mabry, head of local news partnerships for Facebook.

That comes on top of $300 million in grants Facebook announced in January to help programs and partnerships designed to boost local news.

The company doesn’t plan to launch newsgathering efforts of its own, Kornblut said.

“Our history has been — and we will probably stick to it — to let journalists do what they do well and let us support them and let them do their work,” she said.

Facebook Still Working to Remove All Videos of New Zealand Terrorist Attack

Facebook is continuing to work to remove all videos of the mass shooting in New Zealand, which the perpetrator livestreamed Friday, the company said Sunday.

“We will continue working directly with New Zealand Police as their response and investigation continues,” Mia Garlick of Facebook New Zealand said in a statement Sunday.

Garlick said the company is currently working to remove even edited versions of the original video that do not contain graphic content, “out of respect for the people affected by this tragedy and the concerns of local authorities.”

In the 24 hours following the mass shooting, which left 50 people dead, Facebook removed 1.5 million videos of the attack, of which 1.2 million were blocked at upload, the company said, meaning some 300,000 copies were taken down after they had been posted.

Facebook’s most recent comments follow criticism of the platform after the shooter not only livestreamed the 17 graphic minutes of his rampage, using a camera mounted on his helmet, but also had posted a 74-page white supremacist manifesto on Facebook.

Earlier Sunday, New Zealand’s Prime Minister Jacinda Ardern told a news conference that there were “further questions to be answered” by Facebook and other social media platforms.

“We did as much as we could to remove or seek to have removed some of the footage that was being circulated in the aftermath of this terrorist attack. Ultimately, though, it has been up to those platforms to facilitate their removal and support their removal,” she said.

The attack came during Friday prayers when the Al Noor Mosque and the nearby Linwood Mosque were filled with hundreds of worshippers. The victims of Friday’s shooting included immigrants from Jordan, Saudi Arabia, Turkey, Indonesia and Malaysia.

Social Media Scramble to Remove New Zealand Suspect’s Video

They built their services for sharing, allowing users to reach others around the world. Now they want people to hold back.

Facebook and other social media companies battled their own services Friday as they tried to delete copies of a video apparently recorded by the gunman as he killed 49 people and wounded scores of others in attacks on two New Zealand mosques.

The video was livestreamed on the suspect’s Facebook account and later reposted on other services.

According to news reports, Facebook took down the livestream of the attack 20 minutes after it was posted and removed the suspect’s accounts. But people were able to capture the video and repost it on other sites, including YouTube, Twitter and Reddit.

YouTube tweeted that it was “working to remove any violent footage.” A post from one user on Reddit asked others not to “post the videos. If you see the videos, bring it to the moderators’ attention.”

Criticism of pace

Despite the companies’ quick actions, they still came under fire for not being fast enough. Critics said the platforms should have better systems in place to locate and remove violent content, rather than systems that help it spread once posted.

One critic, Tom Watson, a member of the British Parliament and deputy leader of the Labour Party, called for YouTube to stop all new videos from being posted to the site if it could not stop the spread of the New Zealand video.

Resistance to censorship

The companies’ race to stamp out the New Zealand video highlighted the dilemma that social media companies have faced, particularly as they have allowed livestreaming.

Built on users’ content, Facebook, YouTube and others have long resisted the arduous task of censoring objectionable content.

At hearings in Washington or in media interviews, executives of these firms have said that untrue information is in itself not against their terms of service.

Instead of removing information deemed fake or objectionable, social media companies have tried to frame the information with fact checking or have demoted the information on their sites, making it harder for people to find.

That is what Facebook appears to be doing with the anti-vaccination content on its site. Earlier this month, Facebook said it would curtail anti-vaccination information on its platforms, including blocking advertising that contains false information about vaccines. It did not say it would remove users expressing anti-vaccination content.

But sometimes the firms do remove accounts. Last year, Facebook, Twitter and others removed from their platforms Alex Jones, an American commentator known for spreading conspiracy theories and stirring hatred.

More monitors

In the past year, some social media companies have hired more people to monitor content so that issues are flagged faster, rather than having to wait for other users or the firm’s algorithms to flag objectionable content.

With the New Zealand shooting video, Facebook and other firms appeared to be in lockstep, saying they would remove the content as quickly as they found it.

But there have been more calls for human and technical solutions that can quickly stop the spread of such content across the internet.

Facebook Product Chief Cox to Leave in Latest Executive Exit

Facebook Inc said on Thursday Chief Product Officer Chris Cox will leave the social media network after 13 years, adding to a recent string of high-profile exits.

Also departing is WhatsApp Vice President Chris Daniels, Chief Executive Officer Mark Zuckerberg said in a blog post. The company does not plan to appoint anyone to fill Cox’s role in the near term, he said.

Cox, among the first Facebook hires, gained oversight of WhatsApp and Instagram following the exits of their founders. In September, Instagram co-founders Kevin Systrom and Mike Krieger resigned as chief executive officer and chief technical officer of the photo-sharing app owned by Facebook.

Jan Koum, the co-founder of WhatsApp, left in April last year.

“As Mark has outlined, we are turning a new page in our product direction, focused on an encrypted, interoperable, messaging network. …This will be a big project and we will need leaders who are excited to see the new direction through,” Cox said in a Facebook post.

Will Cathcart, vice president of product management, will now lead WhatsApp, and Fidji Simo, head of video, games and monetization, will be the new head of the Facebook app, Zuckerberg said.
