Technology

Ghost 6 Integrates with ActivityPub, Embracing Decentralized Social Networking

Ghost 6, the latest version of the newsletter platform, now allows publishers to share content natively with the open social web via ActivityPub. This integration connects Ghost with platforms like Mastodon, Threads, Flipboard, and WordPress.
Ghost also offers Bluesky compatibility through Bridgy Fed. Other additions in this release include native analytics for tracking performance and engagement, various payment methods, personalized content, and updated pricing plans.
This release positions Ghost as a strong competitor to Substack, particularly in the wake of Substack's recent controversy.
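To make the integration concrete, here is a minimal sketch of how any ActivityPub-capable reader (a Mastodon server, for example) could discover a federated publication. The handle and domain are made up for illustration, and nothing here is Ghost-specific: WebFinger discovery and the actor/outbox documents are part of the open protocols Ghost 6 now speaks.

```python
import requests

# Hypothetical handle for a federated publication; replace with a real one.
HANDLE = "index@example-publication.com"
user, domain = HANDLE.split("@")

# Step 1: WebFinger discovery - ask the domain who this handle belongs to.
webfinger = requests.get(
    f"https://{domain}/.well-known/webfinger",
    params={"resource": f"acct:{HANDLE}"},
    timeout=10,
).json()

# Step 2: pick out the ActivityPub actor URL from the WebFinger links.
actor_url = next(
    link["href"]
    for link in webfinger["links"]
    if link.get("rel") == "self" and link.get("type") == "application/activity+json"
)

# Step 3: fetch the actor document; its "outbox" lists published posts
# as Activity Streams objects that other servers can follow and render.
actor = requests.get(
    actor_url, headers={"Accept": "application/activity+json"}, timeout=10
).json()
print(actor.get("preferredUsername"), actor.get("outbox"))
```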

Artificial Intelligence

OpenAI Unveils GPT-OSS: New Open-Weight AI Reasoning Models

OpenAI has launched two open-weight AI reasoning models, gpt-oss-120b and gpt-oss-20b, available on Hugging Face. These models aim to compete with other open-source AI models and are designed to be more accessible, with the smaller model capable of running on a consumer laptop.
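For a sense of what running the smaller model locally might look like, here is a minimal sketch using the standard Hugging Face transformers pipeline. It assumes the weights are published under the repo id openai/gpt-oss-20b, a recent transformers release that supports the model, and hardware with enough memory; none of this is OpenAI-specific tooling.

```python
# Minimal sketch: text generation with gpt-oss-20b via transformers.
# Assumes the repo id "openai/gpt-oss-20b", a recent transformers version,
# the accelerate package for device_map, and sufficient RAM/VRAM.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="openai/gpt-oss-20b",
    device_map="auto",   # spread the weights across available devices
    torch_dtype="auto",  # let transformers choose a suitable precision
)

messages = [
    {"role": "user", "content": "In two sentences, what is an open-weight model?"},
]
result = generator(messages, max_new_tokens=128)

# With chat-style input, "generated_text" holds the whole conversation,
# ending with the newly generated assistant reply.
print(result[0]["generated_text"][-1]["content"])
```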

While OpenAI had previously favored a closed-source approach, CEO Sam Altman has acknowledged that the company was "on the wrong side of history" regarding open sourcing. The release comes amid growing competitive pressure from Chinese AI labs and encouragement from entities like the Trump administration to promote AI aligned with democratic values.

The models excel at powering AI agents and calling tools like web search or Python code execution. They were trained using high-compute reinforcement learning, similar to OpenAI's proprietary models. However, they are text-only and cannot process images or audio.

OpenAI is releasing gpt-oss under the Apache 2.0 license, allowing enterprises to monetize the models without seeking OpenAI's permission or paying licensing fees. While the models are state-of-the-art among open models, they hallucinate more than OpenAI's latest proprietary reasoning models, o3 and o4-mini. OpenAI delayed the release multiple times to address safety concerns; its testing found that gpt-oss may marginally increase biological capabilities but does not reach the company's threshold for high-risk capability.

Technology Ethics

AI Crawling Ethics: The Perplexity vs. Cloudflare Web Scraping Debate

Cloudflare accused Perplexity of stealthily scraping websites, leading to a debate about AI agent access. The core question is whether an AI agent accessing a website on behalf of a user should be treated like a bot or a human.

Cloudflare tested Perplexity by blocking its AI crawler from a newly created website and then asking Perplexity about the site's content, which Perplexity was able to answer despite the block. Cloudflare CEO Matthew Prince criticized Perplexity, comparing its behavior to that of North Korean hackers.
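For context on the blocking mechanism at the center of the dispute, the sketch below shows the conventional way a site declares a crawler unwelcome in robots.txt and how that declaration can be checked programmatically. The "PerplexityBot" user-agent string and the example URL are assumptions for illustration; Cloudflare's own blocking also operates at the network level, which robots.txt cannot express.

```python
# Sketch: checking whether a robots.txt policy permits a given crawler.
# The robots.txt content, the "PerplexityBot" token, and the URL below
# are illustrative assumptions, not taken from any specific site.
from urllib.robotparser import RobotFileParser

ROBOTS_TXT = """\
User-agent: PerplexityBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

for agent in ("PerplexityBot", "Mozilla/5.0 (an ordinary browser)"):
    allowed = parser.can_fetch(agent, "https://example-site.test/article")
    print(f"{agent}: {'allowed' if allowed else 'blocked'}")
```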

Defenders of Perplexity argue that accessing sites on behalf of users is acceptable, questioning why an LLM accessing a site for a user should be treated differently than a web browser. Perplexity defended itself, claiming the behavior was from a third-party service.

The debate highlights the broader issue of bot activity reshaping the internet, with bots now outstripping human activity online. LLMs are already cutting into website traffic, raising the question of whether websites should block AI agents that might ultimately drive business to them.

Technology

WhatsApp Enhances Security with New Anti-Scam Features and Account Takedowns

WhatsApp is rolling out new features to combat scams, including safety overviews for group chats and alerts for individual chats with unknown contacts. The company has also taken down over 6.8 million accounts linked to scam centers.

The new group chat feature surfaces key information and safety tips when someone who is not in your contacts adds you to a group. For individual chats, WhatsApp is testing alerts that caution users when they start a conversation with someone outside their contacts.

WhatsApp also worked with OpenAI to disrupt a scam operation run out of a scam center in Cambodia, where scammers used ChatGPT to generate their initial messages.

WhatsApp advises users to verify the legitimacy of requests, question whether they make sense, and confirm the identity of anyone claiming to be a friend or family member.

Technology Policy

EU AI Act: Balancing Innovation and Regulation in the Age of AI

The European Union's Artificial Intelligence Act (EU AI Act) is designed to establish a uniform legal framework for AI across EU countries, promoting the free movement of AI-based goods and services. It applies to both local and foreign companies involved in AI development and deployment.

The EU AI Act aims to foster trustworthy AI while protecting health, safety, fundamental rights, and environmental protection. It adopts a risk-based approach, banning unacceptable risk use cases, regulating high-risk uses, and applying lighter obligations to limited-risk scenarios.

The rollout of the EU AI Act began on August 1, 2024, with staggered compliance deadlines. As of August 2, 2025, obligations for general-purpose AI (GPAI) models apply, with additional requirements for models deemed to pose systemic risk. Penalties for non-compliance can be significant, reaching up to €35 million or 7% of worldwide annual turnover, whichever is higher, for prohibited AI applications.
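As a back-of-the-envelope illustration of how that cap works, the short sketch below computes the maximum fine for a prohibited-practice violation as the higher of the two figures. The turnover amounts are invented for the example and carry no regulatory meaning.

```python
# Illustrative only: for prohibited AI practices, the Act caps fines at
# the *higher* of EUR 35 million or 7% of worldwide annual turnover.
def max_fine_prohibited_practice(annual_turnover_eur: float) -> float:
    return max(35_000_000, 0.07 * annual_turnover_eur)

# Hypothetical turnovers, chosen purely for illustration.
for turnover in (100_000_000, 2_000_000_000):
    fine = max_fine_prohibited_practice(turnover)
    print(f"Turnover EUR {turnover:,.0f} -> maximum fine EUR {fine:,.0f}")
```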

Some tech companies have expressed concerns about the EU AI Act, fearing it could hinder AI development and deployment in Europe, and have lobbied for a pause; the EU has nevertheless held to its implementation timeline. Many companies, including Google, Amazon, and Microsoft, have signed the voluntary GPAI code of practice.

AI Education

Google's NotebookLM Expands Access to Younger Users in Growing AI Education Race

Google's NotebookLM, an AI note-taking app, is now available to Google Workspace for Education users of any age and consumers ages 13 and up. The update removes prior age restrictions, expanding access to AI-powered research tools to younger students.

Students can now use features like Audio Overviews, interactive Mind Maps, and Video Overviews to better understand class materials. Google says NotebookLM enforces stricter content policies for users under 18 and will not use their data for AI training.

The change follows OpenAI’s introduction of a study mode for ChatGPT, indicating increased competition in the AI education sector.

Artificial Intelligence

ElevenLabs Expands into AI Music Generation with Commercial Use Claims

ElevenLabs has launched a new AI model for generating music, asserting it is cleared for commercial use, marking its expansion beyond text-to-speech. The company has secured deals with Merlin Network and Kobalt Music Group to utilize independent musicians' materials for AI training.

Concerns persist about the training data behind AI music generation, highlighted by the RIAA's copyright-infringement lawsuits against Suno and Udio. Both companies are reportedly in licensing talks with major record labels.

ElevenLabs' move into music generation raises questions about the ethical implications and potential legal challenges of AI-generated music that can mimic artists like Dr. Dre and Kendrick Lamar.

Cybersecurity

Cisco Data Breach: Hacker Steals Customer Information via Voice Phishing Attack

Cisco has disclosed a data breach that occurred on July 24th, resulting from a voice phishing (vishing) attack. A cybercriminal impersonated a trusted entity and tricked a Cisco representative into providing access to a third-party cloud-based customer relationship management (CRM) system.

The attacker then exported a subset of basic profile information belonging to Cisco.com users. The compromised data includes names, organization names, addresses, Cisco-assigned user IDs, email addresses, phone numbers, and account-related metadata such as account creation dates.

Cisco has not specified the number of affected users. The breach resembles a recent series of attacks targeting companies' Salesforce data; Cisco is a known Salesforce customer, though it described the affected system only as a third-party cloud-based CRM.

Artificial Intelligence

DeepMind's Genie 3: A Leap Towards General-Purpose AI Agents

Google DeepMind has unveiled Genie 3, a foundation world model designed to train general-purpose AI agents, marking a significant step towards artificial general intelligence (AGI). Genie 3 can generate interactive 3D environments at 720p resolution and 24 frames per second from simple text prompts.

The model builds on Genie 2 and Veo 3, incorporating a deep understanding of physics to keep its simulations consistent. It supports promptable world events and remembers what it has already generated, giving it a grasp of physics akin to human intuition.

Genie 3's capabilities make it well suited to training agents for general-purpose tasks, a key element in achieving AGI. It still has limitations: the range of actions an agent can take is restricted, complex interactions between multiple agents are difficult to model, and it can only sustain a few minutes of continuous interaction.

Despite these limitations, Genie 3 represents a step forward in enabling agents to plan, explore, and learn through trial and error, potentially ushering in a new era for embodied agents and AI's ability to discover strategies beyond human understanding.