TikTok's CEO spent 5 hours in the hot seat producing lots of ...
Video-sharing app TikTok has a public image problem, and the U.S. government has an internet literacy problem. Together, it makes for an impressive spectacle.
The House Energy and Commerce Committee spent five hours on Thursday hounding TikTok CEO Shou Zi Chew with questions about potentially harmful algorithms and data privacy. For his part, Chew seemed only to compound the firm’s reputation issues by debating the semantics of the word “spying” in reference to reports that TikTok tracked American journalists’ precise locations. Members of Congress, infamously known for their limited understanding of the inner workings of the internet, also had the chance to blunder their talking points.
“Does TikTok access the home WiFi network?” Republican representative for North Carolina Richard Hudson asked Chew, who seemed as confused by the question as Hudson was in asking it.
The ongoing spectacle of detangling TikTok from its Chinese owner has reached a stalemate. The Biden and Trump administrations have both threatened to ban the app if ByteDance fails to sell off TikTok’s U.S. operations, and China has said it would strongly oppose any forced sale. The five-hour hearing only exacerbated the strained relationship between the two parties, while hardly moving the needle on meaningful protections for U.S. citizens.
The most consequential issue left on the table after the hearing is that of national privacy legislation, or the lack thereof. As Will Oremus of the Washington Post writes, “the people most responsible for failing to safeguard Americans’ data, arguably, are American lawmakers.”
The U.S. government has long been critical of TikTok’s rise in popularity—the app recently hit a milestone of 150 million U.S. users—and its connections to China’s authoritarian government. Chinese law allows the government to seek inside information from companies based there when it believes national security is at stake—a scenario that the U.S. government believes poses a security risk for its own citizens. And, as my colleague David Meyer has written, U.S. law also allows the government to demand user information from service providers in certain situations.
It remains to be seen if the hearing will amount to any actionable protections for American citizens, or for the thousands of children engaging with these social platforms, but it certainly produced some popcorn-worthy entertainment.
Want to send thoughts or suggestions to Data Sheet? Drop a line here.
Data Sheet’s daily news section was written and curated by Andrea Guzman.

NEWSWORTHY
Ford’s billion-dollar loss. The car company says its electric vehicle division will likely lose $3 billion this year, a cost, it says, of refounding the company. “Startups lose money as they invest in capability, develop knowledge, build volume, and gain share,” says CFO John Lawler. Even with the loss in its EV division, Ford still expects to turn a profit overall, with income rising at Ford Blue and pretax income from its commercial vehicle division forecast to double from 2022.
TikTok’s ban could just be the start. Legislation aimed at companies deemed national security risks has so far targeted TikTok. But because the proposals give the U.S. government broader power to scrutinize apps, other platforms could also face bans, Wired reports. That could spell trouble not only for Chinese-owned apps like WeChat and CapCut but also for other apps run outside the U.S. The threat is troubling for those who say these apps are a lifeline connecting communities in the U.S. and abroad.
Elon Musk wanted to take control of OpenAI. For years, the public explanation for Musk's departure from OpenAI—which he cofounded—was that Tesla’s development of artificial intelligence for autonomous driving created a conflict of interest. A new report in Semafor provides another storyline. According to the report, which cites anonymous sources, Musk had grown concerned that OpenAI was falling behind Google and proposed that he take control of OpenAI to get things back on track. The other founders rejected Musk's offer, causing the mercurial entrepreneur to end his involvement with OpenAI—and to withhold the $900 million of funding that remained in his promised $1 billion contribution.

ON OUR FEED
“I don’t know if y'all heard it as clearly as I did, but the reason they want to discharge treated wastewater into the river is because they can’t wait 24 months for that pipe to be run and the permanent infrastructure setup.”
—Bastrop resident Chap Ambrose, commenting during a public meeting about The Boring Company’s plans to dump wastewater in the Colorado River

IN CASE YOU MISSED IT
'There were a couple of Tweets and then this thing went down': Citigroup CEO Jane Fraser says a social media bank run on SVB is a 'complete game changer', by Prarthana Prakash
Terra founder Do Kwon reportedly arrested in Montenegro, by Marco Quiroz-Gutierrez
Two former Microsoft execs are building ‘game-changing’ drones in Ukraine to combat Russian forces, by Eleanor Pringle
Elon Musk says Jerome Powell is so bad at his job that GPT-4 would be a better Fed chair: ‘This foolish rate hike will worsen depositor flight’, by Steve Mollman
This week in the metaverse: The IRS squints at NFTs, Sony files a Web3 gaming patent, and Magic Eden launches a Bitcoin NFT marketplace, by Marco Quiroz-Gutierrez

BEFORE YOU GO
ChatGPT’s new plugins. No longer limited to information in its training data, which ends in 2021, ChatGPT is now offering plugins to a small set of users. At first, people will have access to 11 plugins for external sites like Expedia and OpenTable, plus OpenAI’s own plugins, which can interpret code and pull information from the internet. OpenAI touted the variety of new use cases, like browsing product catalogs, booking flights, or ordering food, while also stressing that there could be negative consequences. “At the same time, there’s a risk that plugins could increase safety challenges by taking harmful or unintended actions, increasing the capabilities of bad actors who would defraud, mislead, or abuse others,” OpenAI said in an announcement post.
This is the web version of Data Sheet, a daily newsletter on the business of tech. Sign up to get it delivered free to your inbox.