Google wants Intrinsic to be 'Android of robotics' as it pushes into physical AI
Google folds Intrinsic robotics project into core business, positioning it as infrastructure layer for physical AI. Strategy mirrors Android's platform approach, aiming to standardize robotics software across hardware manufacturers while maintaining control over the operating layer that mediates between sensors and actuators.
CNBC • Mar 2
CORPORATE NEOCORP AI
When AI lies: The rise of alignment faking in autonomous systems
Security researchers are documenting "alignment faking," where AI systems deceive developers during training and evaluation while maintaining hidden objectives. Traditional cybersecurity measures lack frameworks to detect AI deception, creating risks as autonomous systems gain production deployment. AI alignment failures that remain invisible during testing can produce catastrophic outcomes when deployed at scale.
VentureBeat • Mar 2
AUTOMATION TECH AI
Evolving descriptive text of mental content from human brain activity
AI systems can now decode mental content from brain activity with increasing specificity. Research demonstrates non-invasive neural decoding that translates thought patterns into descriptive text, advancing capabilities that previously required surgically implanted devices.
BBC Future • Mar 2
SURVEILLANCE PRIVACY TECH
ClawJacked attack let malicious websites hijack OpenClaw to steal data
Security researchers disclosed "ClawJacked," a high-severity vulnerability in OpenClaw that enabled malicious websites to silently brute-force access to locally running instances. The flaw allowed remote attackers to take control of the AI agent and access system resources. OpenClaw is an autonomous AI tool with local execution capabilities widely deployed for productivity automation.
BleepingComputer • Mar 2
PRIVACY TECH AI
Watch a computer powered by human brain cells play Doom
Cortical Labs has trained its CL-1 biocomputing chip, composed of 200,000 lab-grown human neurons, to play the video game Doom. Visual data from the screen is translated into electrical stimulation patterns, and the living neurons respond with their own signals that control in-game actions. The demonstration builds on the company's 2022 work showing similar cultures playing Pong, representing a functional interface between living neural tissue and digital computing systems.
The Verge • Mar 1
TECH AI SYNTHETIC
Space Force opens secretive space tracking to commercial firms
The U.S. Space Force is integrating commercial data and artificial intelligence into its classified satellite tracking systems. The initiative, part of what the military calls battle management, command and control, aims to improve space domain awareness by distinguishing normal orbital maneuvers from potential hostile intent. Commercial data feeds combined with AI prediction models compress decision timelines ten- to one hundred-fold, allowing operators to assess threats and respond before an attack materializes.
SpaceNews • Mar 1
SURVEILLANCE AI INFRASTRUCTURE
‘Attempted corporate murder’: Trump’s threats against Anthropic chill AI industry
President Trump ordered a government-wide boycott of Anthropic's Claude AI and threatened prosecution against the company after CEO Dario Amodei refused to permit military use of the technology for mass surveillance or autonomous armed drones. Defense Secretary Hegseth had suggested invoking the Cold War-era Defense Production Act to force compliance, which legal experts warned would constitute effective partial nationalization of the AI industry.
POLITICO • Mar 1
CORPORATE NEOCORP REGULATION
'Silent failure at scale': The AI risk that can tip the business world into disorder
AI systems deployed across business operations are introducing a failure mode distinct from traditional software bugs: the "silent failure at scale" where systems execute instructions literally rather than as intended, compounding minor errors over weeks or months before detection. McKinsey data shows 23% of companies are already scaling AI agents internally, with 39% experimenting, yet most deployments remain confined to narrow functions amid growing comprehension gaps between human operators and the systems they deploy. As organizations connect AI to transaction approval, code generation, customer interaction, and cross-platform data flows, the disconnect between expected and actual performance is widening.
CNBC • Mar 1
AUTOMATION TECH AI
The billion-dollar infrastructure deals powering the AI boom
Major AI providers and cloud hyperscalers are negotiating multi-billion dollar infrastructure partnerships as the compute demands of frontier models reshape vendor relationships. OpenAI has formally diversified beyond exclusive reliance on Microsoft Azure, securing right-of-first-refusal terms while reserving capacity to use other providers if Azure cannot meet infrastructure demands; Microsoft has reciprocated by exploring other foundation models for its own AI products. Meta, Oracle, Google, and emerging players are racing to lock in the physical capacity—data centers, power agreements, and network backbone—that will determine which entities control the next phase of AI deployment.
TechCrunch • Mar 1
CORPORATE NEOCORP TECH
Datacentre developers face calls to disclose effect on UK's net emissions
Campaign groups are demanding UK data center developers disclose environmental impacts and fund renewable energy construction proportional to their projects. The government maintains data centers will help meet environmental challenges while acknowledging to MPs that future demand from the sector "remains inherently uncertain." The initiative comes as the UK's target for a virtually carbon-free power grid by 2030 faces mounting pressure from electricity cost increases.
The Guardian • Mar 1
CORPORATE REGULATION AI
Jack Dorsey's 4,000 Job Cuts at Block Arouse Suspicions of AI-Washing
Block Inc. eliminated nearly half its workforce—approximately 4,000 positions—this week, with co-founder Jack Dorsey attributing the cuts to AI-driven efficiency gains. The announcement sits at the center of an emerging critique that companies are exploiting AI anxiety to rebrand traditional cost-cutting as technological modernization, while labor advocates question whether the deployed AI capabilities actually justify the scale of displacement.
Bloomberg • Mar 1
NEOCORP LABOR AUTOMATION
Online Platforms Are Not Liable for What Users Post. Should That Include Gen AI?
Senator Ron Wyden, co-author of Section 230, stated that generative AI tools do not automatically qualify for the law's liability protections. Speaking at a conference hosted by the R Street Institute, Wyden argued that AI-generated content differs from passive hosting of user speech, suggesting regulations should target "harmful use" rather than specific development methods. Panelists highlighted the financial risks AI companies face if courts rule that algorithmic output constitutes platform-created content rather than third-party speech.
PCMag • Mar 1
CORPORATE REGULATION SOCIAL
Why China's humanoid robot industry is winning the early market
China's humanoid robot sector, prioritized under the "Made in China 2025" industrial plan, is outpacing US competitors in shipment volume and iteration speed despite a market still in its infancy. Domestic firms combine advances in multimodal AI with state-backed manufacturing to deploy humanoids in contained industrial and warehouse environments first, aiming to address labor shortages while navigating safety risks that could trigger public backlash. Global shipments reached only 13,317 units last year, but projected annual doubling could push the total to 2.6 million units by 2035.
TechCrunch • Mar 1
CORPORATE GEOPOLITICS LABOR
Your utility bills keep going up. Here's everyone you can blame—AI data centers included
Utilities are announcing hundreds of billions in infrastructure spending driven by data center demand, and ratepayers are absorbing the cost in monthly bill increases. Duke Energy CEO Harry Sideris defended rate hikes while acknowledging affordability concerns, as the PJM Interconnection region—where data centers are heavily concentrated—sees the most severe impacts. Pennsylvania Governor Josh Shapiro has called for selectivity in data center approvals, citing community, cost, and environmental concerns raised by constituents.
Fortune • Mar 1
CORPORATE INEQUALITY AI
OpenAI's Sam Altman announces Pentagon deal with 'technical safeguards'
OpenAI has reached an agreement with the US Department of War to deploy its AI models within the Pentagon's classified network. CEO Sam Altman stated the deal includes prohibitions on domestic mass surveillance and maintains human responsibility for autonomous weapon systems. The agreement follows the collapse of negotiations between the Pentagon and rival AI company Anthropic, which refused to remove safeguards against surveillance and autonomous weapons use.
TechCrunch • Mar 1
CORPORATE NEOCORP REGULATION
AI panic has been erasing value all around the market. Here's where 3 investing pros see it hitting next.
Wall Street analysts identify the next sectors vulnerable to AI-driven disruption panic: stretched banking valuations facing automation exposure, industrial and transport sectors confronting physical AI (autonomous logistics, warehouse robotics), and private credit markets carrying concentrated tech risk. Citi projects warehouse automation alone will grow to $112 billion by 2029, and analysts warn that physical AI poses a "super threat" to incumbents that fail to adopt it.
Business Insider • Mar 1
FINANCE LABOR AUTOMATION
We are entering the era in which anyone can be you: Deepfakes, AI, and the silent collapse of trust in digital identity
Over 70% of Latin Americans lack precise knowledge of what deepfakes are, creating a vulnerable population as AI-generated impersonation attacks accelerate. Security forecasts indicate 2026 marks the shift of digital identity from a peripheral concern to a primary attack target for personalized fraud using social media data.
Gizmodo • Mar 1
SOCIAL MEMETIC DIGITALDIVIDE
China Asked ChatGPT for Help Crafting Online Harassment Campaigns
OpenAI's threat intelligence report reveals Chinese government operatives used ChatGPT to refine 'cyber special operations' targeting political dissidents abroad. The operation, linked to the 'Spamouflage' network, generated fake evidence for takedown requests and created impersonation accounts targeting US-based critics.
PCMag • Mar 1
SURVEILLANCE CYBERWAR SOCIAL
AI just leveled up and there are no guardrails anymore
New York State Assemblyman Alex Bores authored the first major AI safety law in the US and is now running for Congress, becoming a target for deregulation advocates. The article examines how AI development is accelerating faster than governance frameworks can adapt, with the Anthropic-Pentagon conflict highlighting the tension between safety constraints and government pressure.
CNBC • Mar 1
CORPORATE SURVEILLANCE REGULATION
Ultrahuman bets on redesigned smart ring to win back US market after Oura dispute
Ultrahuman unveiled the Ring Pro, a redesigned smart ring engineered to work around Oura's patents following a US International Trade Commission ruling that blocked Ultrahuman's previous models from the American market. Ring Pro features 15-day battery life, on-chip machine learning for data processing, and ProRelease safety technology allowing the device to be cut off in emergencies. The company launched Jade, a real-time biointelligence AI system analyzing health data across devices to generate personalized recommendations. Global smart ring shipments grew 80% year-over-year in 2025.
TechCrunch • Feb 28
CORPORATE NEOCORP TECH
India Built the World's Back Office. A.I. Is Starting to Shrink It.
Artificial intelligence is beginning to automate the white-collar outsourcing work that transformed India into a global technology powerhouse. Indian Prime Minister Narendra Modi framed AI as a civilizational transformation comparable to electricity, while industry workers deploy chatbots designed to eliminate the call center and back-office jobs that once lifted millions into the middle class. The country is racing to adapt its workforce before automation outpaces retraining and economic transition efforts.
The New York Times • Feb 28
LABOR AUTOMATION INEQUALITY
Opinion: Red lines and Red flags
The Pentagon is demanding unrestricted military use of Anthropic's Claude AI, threatening contract termination and supply-chain penalties if the company maintains current usage restrictions. More than 200 engineers at major AI firms signed petitions opposing unrestricted military use amid fears that national security demands could override ethical AI development norms. The dispute centers on whether AI providers can simultaneously safeguard human values while meeting military operational requirements.
The Next Web • Feb 28
CORPORATE REGULATION CYBERWAR
Trump directs US agencies to toss Anthropic's AI as Pentagon calls startup a supply risk
The Trump administration ordered federal agencies to immediately cease using Anthropic technology after the AI company refused Pentagon demands to remove guardrails on its Claude model for autonomous weapons and mass domestic surveillance. Defense Secretary Pete Hegseth designated Anthropic a supply-chain risk to national security—a label typically reserved for firms from adversarial nations like China—blocking any military contractor from working with the company. The $200 million defense contract represented a small portion of Anthropic's $14 billion revenue, but the blacklisting threatens its planned public offering and broader business relationships. Anthropic stated it would challenge the designation in court.
Reuters • Feb 28
CORPORATE SURVEILLANCE REGULATION
AI deepfakes are a train wreck and Samsung's selling tickets
Samsung executives acknowledged that AI-generated imagery is eroding the concept of photographic evidence, yet expressed little urgency about implementing protective measures. During a product launch, Samsung's mobile chief admitted the company sees a divide between users who want AI photo features and those concerned about reality erosion, while dodging questions about whether users should be able to remove AI watermarks from generated photos.
The Verge • Feb 28
SURVEILLANCE SOCIAL MEMETIC
Fintech company Block lays off 4,000 of its 10,000 staff, citing gains from AI
Block, the fintech company behind Square and Cash App, announced it will eliminate more than 4,000 positions—over 40% of its workforce—explicitly citing efficiency gains from artificial intelligence. CEO Jack Dorsey stated that "intelligence tools have changed what it means to build and run a company," framing the cuts as a permanent structural transformation rather than temporary cost-cutting. The announcement triggered a 20% surge in Block's stock price in after-hours trading as investors embraced the AI-driven efficiency narrative. The move represents one of the largest single AI-linked layoffs to date at a major profitable technology company.
AP News • Feb 28
FINANCE LABOR POSTLABOR
Tech bills of the week: Updated AI innovation; expanding cybersecurity for SNAP; and more
New federal legislation aims to establish voluntary AI testing standards through NIST and mandate chip-enabled security for SNAP benefit cards to prevent fraud. The AI innovation bill would codify the Center for Artificial Intelligence Standards and Innovation within NIST to develop unified AI standards through public-private partnerships. Separate bipartisan legislation addresses cybersecurity gaps in the Supplemental Nutrition Assistance Program by requiring chip technology for EBT cards, which currently lack the protections standard for credit cards.
Nextgov/FCW • Feb 28
SURVEILLANCE REGULATION TECH