Online Platforms Are Not Liable for What Users Post. Should That Include Gen AI?
Senator Ron Wyden, co-author of Section 230, stated that generative AI tools do not automatically qualify for the law's liability protections. Speaking at a conference hosted by the R Street Institute, Wyden argued that AI-generated content differs from passive hosting of user speech, suggesting regulations should target "harmful use" rather than specific development methods. Panelists highlighted the financial risks AI companies face if courts rule that algorithmic output constitutes platform-created content rather than third-party speech.
PCMag • Mar 1
CORPORATE REGULATION SOCIAL
We Are Entering the Era in Which Anyone Can Be You: Deepfakes, AI, and the Silent Collapse of Trust in Digital Identity
Over 70% of Latin Americans lack precise knowledge of what deepfakes are, creating a vulnerable population as AI-generated impersonation attacks accelerate. Security forecasts indicate that in 2026 digital identity will shift from a peripheral concern to a primary attack target, with social media data fueling personalized fraud.
Gizmodo • Mar 1
SOCIAL MEMETIC DIGITALDIVIDE
US court blocks landmark law limiting social media use for children
A federal judge has blocked Virginia's law restricting minors' social media access, ruling that the state lacks the authority to limit minors' access to "constitutionally protected speech." Tech lobbying group NetChoice, representing Meta, Google, X, and others, successfully argued the law violated the First Amendment. The Virginia ruling follows similar injunctions in Louisiana and Ohio, while California's narrower law targeting "addictive feeds" was upheld in January.
Financial Times • Mar 1
CORPORATE PRIVACY REGULATION
China Asked ChatGPT for Help Crafting Online Harassment Campaigns
OpenAI's threat intelligence report reveals Chinese government operatives used ChatGPT to refine 'cyber special operations' targeting political dissidents abroad. The operation, linked to the 'Spamouflage' network, generated fake evidence for takedown requests and created impersonation accounts targeting US-based critics.
PCMag • Mar 1
SURVEILLANCE CYBERWAR SOCIAL
AI deepfakes are a train wreck and Samsung's selling tickets
Samsung executives acknowledged that AI-generated imagery is eroding the concept of photographic evidence, yet expressed little urgency about implementing protective measures. During a product launch, Samsung's mobile chief admitted the company sees a divide between users who want AI photo features and those concerned about reality erosion, while dodging questions about whether users should be able to remove AI watermarks from generated photos.
The Verge • Feb 28
SURVEILLANCE SOCIAL MEMETIC
How AI is supercharging Russia's online disinformation campaigns
Security experts warn that Kremlin-aligned actors are deploying AI-generated synthetic videos at scale to shape public opinion across Europe and the US, while Western governments lack adequate tools and laws to respond. A King's College London professor's identity was hijacked via AI voice-over deepfake for a Russia-linked operation dubbed "matryoshka," which embeds false claims in layers of ambient re-posts from compromised accounts.
BBC • Feb 27
GEOPOLITICS CYBERWAR SOCIAL
How scammers are using AI deepfakes to steal money from taxpayers
The Washington Post • Feb 26
CYBERCRIME SOCIAL MEMETIC
Meta's AI sending 'junk' tips to DoJ, US child abuse investigators say
Meta's AI-powered content moderation systems are flooding US law enforcement with low-quality, unreliable reports about child sexual abuse material, according to officers from the Internet Crimes Against Children task force. The flood of useless tips is draining investigative resources and hindering actual cases. The issue emerged during a New Mexico lawsuit against Meta over child safety on its platforms, where the company has defended its cooperation with law enforcement while facing questions about whether its automated detection systems create more noise than signal.
The Guardian • Feb 25
SURVEILLANCE SOCIAL TECH
If Big Tech cared about fighting AI slop, we wouldn't be drowning in it
Analysis argues that C2PA provenance standards and platform labeling efforts are failing to stem AI-generated "slop" flooding social media because adoption remains fragmented and detection tools cannot match industrial-scale synthetic content production. Platforms including YouTube, Instagram, and Meta have only partially implemented authentication systems while X has abandoned C2PA entirely following Musk's acquisition, allowing millions of daily users to remain unprotected from engagement-optimized synthetic media that buries authentic creator content.
The Verge • Feb 24
CORPORATE SURVEILLANCE SOCIAL
The algorithmic feed on X could be shifting political views toward conservatism
A randomized field experiment published in Nature involving 4,965 X users found that using the platform's algorithmic "For You" feed shifted political attitudes toward conservatism compared to a chronological timeline. The effect persisted even after users returned to chronological feeds, suggesting lasting attitude changes from algorithmic exposure. Content analysis revealed the algorithm amplified conservative and activist posts while reducing visibility of traditional news outlets, demonstrating that social media algorithms can measurably reshape political attitudes at scale.
Phys.org • Feb 23
INEQUALITY SURVEILLANCE SOCIAL
Why fake AI videos of UK urban decline are taking over social media
AI-generated deepfakes depicting fabricated scenes of urban decay in south London neighborhoods—particularly Croydon—have gone viral on social media platforms, showing crowds of young Black men in balaclavas at a dilapidated taxpayer-funded waterpark. The videos are created by accounts posing as British news sources and are algorithmically amplified alongside existing narratives linking immigration to urban decline. These deepfakes, marked with small "AI-generated" labels insufficient to prevent circulation, have drawn racist responses and convinced some commenters of their authenticity, demonstrating how synthetic media can fuel racialized disinformation campaigns by exploiting platform algorithmic distribution systems.
BBC • Feb 22
SURVEILLANCE SOCIAL MEMETIC
Inside the Big Tech Lobbying Machine Aiming to Halt Social Media Bans
Meta and Google have dramatically escalated lobbying expenditures across Europe as governments move to implement teen social media bans. Tech industry lobbying in the EU surged 55% from 2021 to 2025, reaching €151 million annually with Meta alone spending €10 million last year. The campaign includes full-page newspaper ads invoking European icons, direct politician engagement, and advocacy for parent-controlled restrictions rather than government-imposed age limits as the industry fights to preserve youth market access.
The New York Times • Feb 22
CORPORATE NEOCORP GEOPOLITICS
Mark Zuckerberg's entourage threatened with contempt for wearing Meta AI glasses into a no-recording courtroom
A California judge threatened Meta CEO Mark Zuckerberg's entourage with contempt of court after they wore Ray-Ban Meta smart glasses into a Los Angeles courtroom where recording devices are prohibited. The incident occurred during a trial over whether Meta's platforms intentionally harm young users. Judge Carolyn B. Kuhl called the apparent product placement stunt "very serious."
Fortune • Feb 21
CORPORATE NEOCORP SURVEILLANCE
AI toys are all the rage in China—and now they're appearing on shelves in the US too
The proliferation of AI-powered toys from China entering Western markets raises concerns about childhood development mediated by commercially driven artificial intelligence, blurring the line between genuine interaction and programmed responses.
MIT Technology Review • Oct 7
SURVEILLANCE SOCIAL TECH
As schools embrace AI, more students are using it as a friend
Students increasingly form emotional and even romantic bonds with AI systems in schools, eroding human connection while expanding the data collection and surveillance infrastructure aimed at vulnerable youth.
NPR • Oct 8
SURVEILLANCE SOCIAL AI
Meta launches Hyperscape, technology to turn real-world spaces into VR
Meta deploys photorealistic scanning technology to digitize physical spaces, creating comprehensive virtual replicas of reality for commercial exploitation.
TechCrunch • Sep 17
NEOCORP SURVEILLANCE SOCIAL
How TikTok keeps its users scrolling for hours a day
TikTok's hyper-personalized recommendation algorithm applies opaque addiction-oriented design to hold user attention for hours at a stretch, undermining self-control through surveillance-capitalism mechanisms.
The Washington Post • Oct 7
SURVEILLANCE SOCIAL MEMETIC
'Dial it down': California forces Netflix, Hulu to lower ad volume
Regulatory intervention attempts to rein in platform behavior as streaming services exploit user attention through aggressive advertising.
POLITICO • Oct 6
NEOCORP REGULATION SOCIAL
Young anti-corruption protesters oust Nepal PM Oli
Nepali PM K.P. Sharma Oli resigns following deadly protests by young activists frustrated with corruption and a lack of economic opportunity, plunging the country into political uncertainty.
Reuters • Sep 9
GEOPOLITICS INEQUALITY SOCIAL
I Hate My AI Friend
Writers test the 'Friend' AI necklace that eavesdrops on conversations and provides snarky commentary, revealing the social awkwardness and privacy concerns of always-on AI companions.
WIRED • Sep 8
SURVEILLANCE PRIVACY SOCIAL