Ramsay Research Agent — April 4, 2026
Top 5 Stories Today
1. MCP Security Debt Hits Critical Mass: 30+ CVEs in 60 Days, PraisonAI CVSS 10, Azure AI Foundry CVSS 10
More than thirty CVEs in sixty days. That's the MCP ecosystem's security track record for 2026 so far, and the severity is climbing.
Three disclosures dropped this week that should make anyone running agent infrastructure pause. First, PraisonAI, a popular multi-agent orchestration framework, got hit with five CVEs at once. The worst is CVE-2026-34938, a CVSS 10 sandbox bypass that chains to remote code execution on any version before 1.5.90. All three sandbox layers fail. The other four cover SQL injection via f-string thread IDs (CVE-2026-34934, CVSS 9.8), CLI command injection through the --mcp argument (CVE-2026-34935, CVSS 9.8), unauthenticated WebSocket agent control (CVE-2026-34952, CVSS 9.1), and a SubprocessSandbox escape via missing sh/bash blocklist (CVE-2026-34955, CVSS 8.8). This mirrors the CrewAI CVE cluster from last week. The pattern is clear: multi-agent framework sandboxing is systematically broken across the ecosystem.
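The f-string injection class is worth internalizing, because it keeps recurring across agent frameworks. A minimal, self-contained reproduction of the pattern and its fix (invented schema and names, not PraisonAI's actual code):

```python
import sqlite3

# Toy database standing in for an agent framework's message store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE messages (thread_id TEXT, body TEXT)")
conn.execute("INSERT INTO messages VALUES ('t1', 'hello'), ('t2', 'secret')")

def fetch_unsafe(thread_id):
    # Vulnerable: attacker-controlled input is interpolated into the SQL
    # string, so the input can rewrite the query itself.
    return conn.execute(
        f"SELECT body FROM messages WHERE thread_id = '{thread_id}'"
    ).fetchall()

def fetch_safe(thread_id):
    # Parameterized query: the driver treats thread_id strictly as data.
    return conn.execute(
        "SELECT body FROM messages WHERE thread_id = ?", (thread_id,)
    ).fetchall()

payload = "x' OR '1'='1"
assert fetch_unsafe(payload) == [("hello", ), ("secret", )]  # dumps every row
assert fetch_safe(payload) == []                             # injection inert
```

The fix is one line; the lesson is that any string an LLM or a user can influence has to go through the parameterized path.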
Second, Azure AI Foundry got a CVSS 10 privilege escalation (CVE-2026-32213). No prior authentication required. Any network attacker can escalate to full admin over Azure AI Foundry resources. Microsoft says a fix is available through MSRC, but if you're running AI workloads on Azure AI Foundry and haven't patched, stop reading this and go patch.
Third, CVE-2026-32211 hit the Azure MCP Server itself at CVSS 9.1, plus CVE-2026-5322 for SQL injection in mcp-data-vis.
I've been saying this for weeks: MCP servers aren't development toys. They're network-exposed services. The security posture most teams apply to them, which is basically none, made sense when MCP was a local dev protocol. That phase is over. The vulnerability pattern has shifted from client-side tool poisoning and prompt injection to server-side authentication failures in production cloud services. That's a different threat class entirely.
What to do right now: audit every MCP server in your stack for authentication. Patch PraisonAI to 1.5.90+. Verify Azure AI Foundry patches. Treat your MCP endpoints like you'd treat a public API endpoint, because that's what they are.
2. Seat-Based Pricing Dies: Salesforce AWUs, SAP Consumption Billing, ServiceNow Credits, All in One Quarter
Salesforce did it. Then SAP did it. Now ServiceNow. Something's happening.
The three largest enterprise software companies by market cap all confirmed pricing transitions away from per-seat licensing in the same quarter. This isn't incremental. This is the death certificate for the pricing unit that defined SaaS economics for twenty years.
Salesforce hit $800M ARR on Agentforce with 8,000 deals and 2.4 billion Agentic Work Units (AWUs) completed. AWU is now the primary billing metric, with one AWU equaling one discrete AI task: a processed prompt, a completed reasoning chain, or a tool invocation. Half of Q4 bookings came through Flex Credits.
SAP's CEO said it plain: "AI is so powerful and will automate numerous tasks, so there is no reason to stick with subscription-based billing." They're shifting bundled services into discrete consumption catalogs. New forward-deployed engineering teams start in July to help customers build on SAP's AI stack, borrowing straight from Palantir's playbook.
ServiceNow moved to Pro Plus credits. Same direction, different wrapper.
When all three shift simultaneously, the "seat" as enterprise software's fundamental unit is done. AlixPartners predicts hybrid pricing models will make up 40% of software revenue by end of 2026. They also warned that mid-market enterprise software companies are caught in a squeeze: AI-native entrants replicating apps at a fraction of the cost on one side, tech giants spending billions on the other. Many won't survive the next 24 months.
If you're pricing a product right now, you need to understand what replaces seats. The answer is outcomes. AWUs, credits, resolved conversations, generated artifacts. The unit of value is shifting from "access to a tool" to "work completed by an agent." Every founder and product manager should be modeling what their per-outcome pricing looks like.
3. Cloudflare Dynamic Workers: V8 Isolate Sandboxing for AI Agent Code Execution, 100x Faster Than Containers
If you're running any kind of agent code execution in production, this changes your infrastructure math.
Cloudflare released Dynamic Worker Loader into open beta. The core idea: Workers can now spin up other Workers at runtime, specifically designed for executing AI-generated code. V8 isolates start in milliseconds and use megabytes of memory. Containers start in seconds and use hundreds of megabytes. That's roughly 100x faster cold start and 100x better memory efficiency.
The numbers that matter for builders: $0.002 per unique Worker per day (waived during beta), available to all paid Workers users right now. But the real story is Code Mode. Instead of multi-step LLM calls where the model invokes tools one at a time over multiple round trips, Code Mode lets the agent write a code block that runs tool-calling logic as actual code. Cloudflare says this saves up to 80% in inference tokens. I haven't verified that number myself, but even if it's half that, the economics are significant for anyone paying per-token on agent workloads.
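To make the Code Mode idea concrete, here is a toy sketch of the pattern, not Cloudflare's actual API: instead of one model round trip per tool call, the model returns a single code block and the runtime executes it against the tool functions. Tool names and the execution scope here are invented.

```python
# Conventional tool-calling: one model round trip per tool invocation
# (three trips for the task below). Code Mode: the model emits one code
# block; tool composition happens as ordinary code in a single trip.

def get_weather(city):  # stand-in tools; names are invented
    return {"paris": "rain", "oslo": "snow"}[city]

def send_email(to, body):
    return f"sent to {to}: {body}"

# What the model would return in a single round trip under Code Mode:
generated_code = """
report = ", ".join(f"{c}: {get_weather(c)}" for c in ["paris", "oslo"])
result = send_email("ops@example.com", report)
"""

# Expose only the whitelisted tools to the generated code. In production
# this runs inside a sandboxed isolate, not a bare exec().
scope = {"get_weather": get_weather, "send_email": send_email}
exec(generated_code, scope)
result = scope["result"]
```

The token savings come from the loop structure: intermediate tool results stay inside the sandbox instead of being round-tripped through the model's context on every step.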
I've been building agent systems on containers for the past year. The cold start tax is real. Every time an agent needs to execute generated code, you're waiting for a container to spin up, run the code, return the result. For simple tool calls, that overhead dominates the actual compute. V8 isolates eliminate most of it.
The security model matters too. Each isolate runs in its own sandbox with no shared memory, no filesystem access, and process-level isolation. For AI-generated code that you don't fully trust (which should be all AI-generated code), this is the right abstraction layer. You get execution without giving the code access to anything it shouldn't touch.
This slots into a bigger pattern I keep tracking: the infrastructure layer is catching up to the agent layer. We've had agent frameworks for over a year, but the execution environments were borrowed from the container era. Cloudflare is building execution infrastructure specifically for agents. That's the right move.
4. AI Vulnerability Discovery Creates a Maintainer Capacity Crisis: Linux Reports Jump 5x, Ptacek Says "Vuln Research Is Cooked"
Two senior Linux kernel maintainers independently confirmed something uncomfortable this week. Willy Tarreau reported that security vulnerability reports jumped from 2-3 per week to roughly 10 per week over the past year. Greg Kroah-Hartman confirmed the trend and added a detail that should worry everyone: months ago, the AI-generated reports were "funny" and obviously wrong. Then something changed about a month ago. The reports are now high-quality and accurate. They're overwhelming maintainer bandwidth.
Separately, security veteran Thomas Ptacek published an essay arguing that frontier coding agents will "drastically alter both the practice and economics of exploit development." His thesis: pointing an agent at a source tree and typing "find me zero days" will produce substantial amounts of high-impact vulnerability research, because LLMs encode enough correlation across vast code bodies that the implicit search problems of vuln research play to their core strengths.
Then there's Hexstrike-AI, disclosed by Check Point Research. An offensive framework that lets AI models autonomously run 150+ cybersecurity tools for penetration testing and vulnerability discovery. Threat actors claim it reduces exploitation time from days to under 10 minutes. From finding to weaponization in the time it takes to make coffee.
And the UK NCSC published data showing Claude Opus 4.6 completed roughly half of a 32-step enterprise network simulation for about £65 per attempt. The best models have improved their offensive capability sixfold in 18 months.
Here's the problem nobody's solving: who reviews the AI's homework? Finding vulnerabilities is getting automated. Fixing them still requires human maintainers. The Linux kernel has a handful of people reviewing security patches for the most critical piece of open-source software on the planet, and they're already drowning. This isn't a Linux problem. Any popular open-source project used as context by coding agents will face the same discovery flood. The bottleneck has shifted from finding bugs to triaging fixes, and I don't see a good answer yet.
5. Cursor 3 Launches Agent-First Unified Workspace: The IDE Category Is Fragmenting Into Something Else
Anysphere shipped Cursor 3 (codenamed "Glass") on April 2, and it's not really an IDE anymore. It's an agent workspace.
The redesign is fundamental. Developers enter natural language task descriptions, select an LLM, and Cursor generates code plus a demo video. Local and cloud agents run in parallel and appear in a unified sidebar. The part that caught my attention: you can access it from mobile, web, desktop, Slack, GitHub, and Linear. The IDE broke out of the desktop.
This is a direct response to Claude Code and OpenAI Codex eating into Cursor's developer base. Both of those are terminal-first, agent-first tools that don't need a traditional editor. Cursor's answer is to abandon the traditional editor paradigm entirely and meet them on agent territory. Smart move, but risky. They're betting the entire company on this direction while facing pressure from subsidized competitor pricing.
I use Claude Code daily. It's terminal-based, keyboard-driven, and fits my workflow. Cursor 3 is betting that most developers want the opposite: a visual workspace where you describe what you want and agents figure out the how. I'm not sure which bet wins long-term, but I think they're both right for different types of work. Complex refactoring across a large codebase? I want Claude Code in my terminal. Prototyping a new feature with quick iteration? A visual agent workspace might be faster.
The bigger signal here is category fragmentation. "IDE" used to mean one thing. Now we have terminal agents (Claude Code), agent workspaces (Cursor 3), browser-as-IDE tools (Stagewise, YC S25, which turns the browser into a development environment where the coding agent has native console and debugger access), and cloud agent platforms (Codex). These aren't variations on the same product. They're different products solving different slices of the development workflow. The IDE as a monolithic category is splitting apart.
For builders, the actionable takeaway: don't commit to one tool. The category is moving too fast. Use Claude Code for deep work, Cursor for visual prototyping, and keep an eye on the browser-as-IDE pattern. The tool that wins will be the one that matches how you actually think, not the one with the most features.
Section Deep Dives
Security
Axios npm supply chain attack attributed to North Korean threat actor, 100M weekly downloads exposed. A North Korean threat actor (UNC1069/Sapphire Sleet) compromised the axios npm maintainer account and published backdoored versions (1.14.1 and 0.30.4) with a cross-platform RAT via a malicious "plain-crypto-js" dependency. The exposure window was roughly 3 hours. Google, Microsoft, Elastic, and Palo Alto Networks all published analyses. If you pulled axios between 00:21 and 03:20 UTC on March 31, audit immediately.
Prompt injection success rate exceeds 85% against SOTA defenses when adaptive strategies are used. A meta-analysis of 78 studies published in ScienceDirect catalogs 31 distinct attack techniques including protocol-level attacks specific to MCP. A human-in-the-loop defense layer improves protection to 91.5%. An open-source agent firewall with an 11-layer pipeline shows the most promise for production defense. The takeaway: automated defenses alone aren't enough. HITL is still mandatory for high-stakes agent operations.
Git worktrees need runtime isolation for parallel AI agent development. Penligent research shows worktrees alone are insufficient for parallel agent execution. Shared ports, databases, caches, test state, and environment variables leak across parallel sessions. Environment variable leakage is the biggest blind spot. Defense-in-depth required: isolation + resource limits + network controls + permission scoping.
CEL now() for dependency cooldown periods. SafeDep published a practical defense against supply chain attacks: use CEL time-based policies to refuse any package published within a configurable cooldown window. Most supply chain attacks rely on speed. A 24-72 hour cooldown catches the majority while allowing legitimate updates through. Simple and effective.
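The cooldown check itself is simple. Here is a sketch of the logic in plain Python; the CEL expression in the docstring is illustrative of the shape of such a policy, not SafeDep's exact policy language.

```python
from datetime import datetime, timedelta, timezone

COOLDOWN = timedelta(hours=72)

def allow_package(published_at, now=None):
    """Refuse any package version published within the cooldown window.

    Equivalent in spirit to a CEL policy along the lines of
    now() - version.published_at > duration("72h")  (syntax illustrative).
    """
    now = now or datetime.now(timezone.utc)
    return now - published_at > COOLDOWN

now = datetime(2026, 4, 4, 12, 0, tzinfo=timezone.utc)
fresh = datetime(2026, 4, 4, 9, 0, tzinfo=timezone.utc)   # 3 hours old
aged = datetime(2026, 3, 28, 12, 0, tzinfo=timezone.utc)  # 7 days old
assert not allow_package(fresh, now)  # blocked: inside the cooldown
assert allow_package(aged, now)       # allowed through
```

Note that the 3-hour axios exposure window above would have been caught by even the shortest recommended cooldown.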
SlowMist releases agent-facing zero-trust security skill. SlowMist's open-source framework is designed to be read and deployed BY the AI agent itself, not just human operators. Five-layer security covering pre-execution checks, execution-time constraints, and post-execution review. This is a paradigm shift: security that runs inside the agent's reasoning loop.
Agents
Microsoft 365 federated MCP connectors reach GA with Canva, HubSpot, Google Calendar. Microsoft is bringing federated Copilot connectors using MCP to GA in M365 Copilot in April. Admin rollout completes April 20. Data stays in third-party systems with no indexing in Microsoft services. This is the largest MCP deployment surface in enterprise software.
OpenClaw CVE-2026-32922: CVSS 9.9 privilege escalation, 135K exposed instances. A critical privilege escalation in OpenClaw's device.token.rotate allows any paired device to mint tokens with unrestricted admin+RCE permissions. 340K+ GitHub stars, 135K+ publicly exposed instances, 63% running without authentication. Fixed in OpenClaw 2026.3.11. Patch now.
Ai2 releases MolmoWeb: open-source visual web agent with 30K human trajectories. Allen Institute for AI released MolmoWeb in 4B and 8B parameter sizes. It navigates browsers by interpreting screenshots, not DOM/HTML. The release includes 30K human task trajectories, 590K subtask demonstrations, and 2.2M screenshot Q&A pairs. Outperforms all open models on web agent benchmarks. Full weights, data, and eval tools released openly.
Anthropic secretly testing Conway: an always-on agent with extensions and webhooks. Leaked code reveals Anthropic is building Conway, featuring persistent instances, custom extensions (.cnw.zip packages), webhook-based triggers from external services, and Chrome browser integration. Always-on agents that respond to external events autonomously. Significant strategic shift from chat-first.
Research
Reasoning trace inversion detects when LLMs answer the wrong question. IBM researchers show that reasoning model hallucinations can be reinterpreted as the model answering a different question than asked. By inverting the reasoning trace, the framework detects misalignment between query and response. Practical signal for abstention in deployed reasoning systems.
No single best model for diversity: router-based approach selects models per-prompt. Researchers introduce diversity coverage as a metric and show no single LLM maximizes it across prompt types. Their learned router selects the best model per-prompt from a pool, outperforming any individual model. Practical for multi-model deployments where output variety matters.
De Jure: fully automated pipeline extracts regulatory rules from legal documents without annotation. A four-stage pipeline converts dense hierarchical legal text into machine-readable JSON rules using iterative LLM self-refinement. Zero human annotation, no domain-specific prompting. Practical for any team building compliance automation.
LLMs optimize database query execution plans, 4.78x speedup. Together AI and Stanford research shows LLMs can rewrite query execution plans by correcting cardinality estimation errors. On TPC-DS, the LLM-optimized plan pruned 15.1M rows to 2.9M, cutting memory from 3.3GB to 411MB. No database engine modifications required.
Infrastructure & Architecture
Mintlify replaces RAG with a virtual filesystem for AI documentation, 46s to 100ms. Mintlify built ChromaFs, a virtual filesystem that intercepts UNIX commands and translates them into Chroma vector database queries. Session creation dropped from ~46 seconds to ~100ms with zero marginal per-conversation compute. 312 points on HN. This is a concrete, working alternative to traditional RAG that lets agents explore docs like a codebase.
Half of planned US data center builds delayed or cancelled. Bloomberg reports only ~4GW of expected 12GW is under active construction. High-power transformer lead times stretched from 24-30 months to up to 5 years. Despite delays, Alphabet, Amazon, Meta, and Microsoft still expect to spend over $650B on AI capacity in 2026. The physical world is throttling AI scaling.
Meta, Microsoft, and Google building massive natural gas plants for AI data centers. TechCrunch reports all three are betting on new gas infrastructure to power AI compute. Long-term economics of gas plants are questionable, but they need power now. Community opposition to data centers is becoming a material deployment bottleneck, with new polling showing Americans prefer Amazon warehouses as neighbors over data centers.
Tools & Developer Experience
Claude Code v2.1.92: Bedrock setup wizard, per-model cost breakdowns. Released April 4, the update adds an interactive Bedrock setup wizard for AWS users, per-model cost breakdowns to see what each model costs per session, and an interactive version picker for /release-notes. The /tag and /vim commands were removed.
OpenRouter launches Model Fusion: run multiple models and synthesize best answer. OpenRouter's new experiment queries multiple AI models simultaneously, analyzes each output, and synthesizes a single response. In their testing, every Deep Research agent preferred the fused result to its own output. No paid sub required. Per-query cost is higher due to multiple invocations plus synthesis.
Ralph Claude Code: autonomous development loop with intelligent exit detection, 8.4K stars. Ralph chains multiple Claude Code sessions together for long-running tasks. It re-feeds prompts when Claude exits prematurely and detects genuine completion vs. hitting a wall. Lightweight alternative to custom orchestration for multi-step workflows.
Models
DeepSeek V4 launch reportedly imminent. The Information reported the launch is delayed by rewriting code for Huawei Ascend and Cambricon chips. The model is ~1T parameter MoE with ~37B active parameters, 1M-token context, and native multimodal generation. A "V4 Lite" appeared on DeepSeek's website March 9. Full release expected within weeks.
GPT-IMAGE-2 appears on LMArena under three codenames. An r/singularity post with 421 upvotes reports what appears to be OpenAI's unreleased image model operating under codenames "maskingtape-alpha," "gaffertape-alpha," and "packingtape-alpha." Early testers describe it as far better than previous models. No official announcement yet.
Gemma 4 31B matches Gemini 3 Deep Think on complex security puzzle. An r/LocalLLaMA post shows Gemma 4 correctly identifying a deliberately unwinnable security paradox that Gemini 3 Deep Think completely fell for. The intelligence gap between frontier closed models and open-weight alternatives continues to narrow on reasoning tasks.
llama.cpp merges Gemma 4 tokenizer fix. PR #21343 fixes a bug where "\n\n" was split into two tokens instead of one trained single token. C++-only fix, no GGUF regeneration needed. Silently degraded longer conversations. Critical fix for anyone running Gemma 4 locally.
Vibe Coding
Anthropic cuts off Claude subscriptions for third-party tools like OpenClaw. Starting April 4 at 12pm PT, subscribers can't use their limits for third-party agentic tools. Boris Cherny said subscriptions "weren't built for the usage patterns of these third-party tools." Users get a one-time credit equal to one month's sub (redeemable by April 17) and up to 30% on pre-purchased usage bundles. The shift from technical blocks to billing enforcement completes a four-month arc.
Stagewise redefines browser as development environment. Stagewise (6,515 stars, YC S25) evolved from a browser extension into a purpose-built developer browser with native console and debugger access across all tabs. Developers select elements, describe changes, and the agent edits codebases directly across React, Vue, Angular, Svelte, Next.js, and Nuxt. New category: the browser itself as IDE.
Snyk RSAC report: 48% of AI-generated code contains vulnerabilities. Snyk's 2026 report found AI-driven development creates 2-10x more vulnerabilities per developer, 80% of developers bypass security policies, only 10% scan most AI-generated code, and 75%+ falsely believe AI code is more secure than human-written. The vibe coding security debt is real and accumulating at scale.
Hot Projects & OSS
Serena: MCP coding toolkit reaches 22.5K stars with Microsoft sponsorship. Serena turns any LLM into a coding agent with IDE-level capabilities. Symbol-level code extraction gives dramatically better token efficiency than raw file reads. VS Code team and Microsoft Open Source Programs Office are sponsors.
Google Workspace CLI with AI agent skills hits 23.7K stars. A single Rust-based CLI covering Drive, Gmail, Calendar, Sheets, Docs, Chat, Admin, and 50+ APIs. 40+ AI agent skills. Gained 4,900 stars in three days. Guillermo Rauch called it proof that "2026 is the year of Skills & CLIs."
Onyx: open-source AI platform with agentic RAG surges to 23.6K stars. Onyx combines conversational AI, agentic RAG, deep research reports, code sandbox execution, voice mode, and MCP integration across 50+ data connectors. MIT-licensed. Consolidates capabilities that typically require multiple tools.
Workmux: git worktrees + tmux for parallel agent development, 1.2K stars. Workmux automates managing multiple git worktrees with matching terminal windows. Single command creates worktree + window + pane layout. Includes AI agent dashboard for monitoring parallel branches. The right tool for running multiple coding agents simultaneously.
SaaS Disruption
SaaS credit markets pricing AI disruption: UBS warns default rates could hit 13%. Credit markets are systematically pricing AI disruption into SaaS company debt. The BIS published a quarterly review with $600-750B in private credit loans underwritten against a thesis AI is now dismantling. Software stocks collapsed ~30% between October 2025 and February 2026. Then the narrative flipped: a Goldman Sachs survey found 49% of institutional allocators plan to increase software exposure, the highest since 2017.
Google completes $32B Wiz acquisition, launches agentic SOC. The largest acquisition in Google's history is done. Wiz (trusted by 50% of the Fortune 100) plus Google's threat intelligence. Agentic SOC features include AI triage agents, dark web intelligence agents with 98% accuracy, and M-Trends 2026 data showing adversary handoffs in 22 seconds. At 22 seconds, human SOC analysts can't respond fast enough. AI-native security operations become mandatory.
47 seed-stage companies hit unicorn status in Q1 2026. Crunchbase data shows this pace would produce nearly 200 early-stage unicorns for the full year, versus roughly 100 in 2025. AI and defense tech are driving valuations. The AI-native SaaS replacement wave is creating billion-dollar companies faster than any previous technology cycle.
Policy & Governance
Washington State signs first US AI chatbot minor protection law. Governor Ferguson signed HB 2225, banning manipulative engagement, requiring chatbots to disclose they aren't human, mandating crisis resource referrals for self-harm expressions, and prohibiting sexually suggestive content to minors. Companion bill HB 1170 requires AI-generated media to include provenance watermarks. Both take effect 2027-2028.
78 AI chatbot bills active in 27 states. The Transparency Coalition tracker shows Georgia's SB 540 approved by the House March 25, Colorado's HB 1263 heading to the House floor, California's SB 867 hearing April 6. Regulatory momentum is accelerating nationwide.
Anthropic forms AnthroPAC as AI industry midterm spending tops $300M. Anthropic filed for a PAC with voluntary employee contributions capped at $5K. Separately, they funded Public First Super PAC with at least $20M. Total AI industry political spending for the 2026 midterms exceeds $300M. Every major AI lab is now playing politics directly.
Newsom signs executive order on AI governance. The California order requires state agencies to implement safety, privacy, and bias audits for AI procurement. Because most major AI companies build for California's market, these rules become de facto national standards.
Skills of the Day
- Use CEL time-based policies to block fresh npm packages. Add a 24-72 hour cooldown window in your dependency scanner using Common Expression Language now() functions. Most supply chain attacks, like the axios compromise, rely on speed, and this catches the majority before they reach your CI pipeline. SafeDep has the implementation guide.
- Switch from semantic chunking to recursive 512-token chunking in your RAG pipeline. Vectara's 2026 benchmark across 50 academic papers shows recursive character splitting at 512 tokens with 10-20% overlap scores 69% end-to-end accuracy vs. semantic chunking at 54%. Semantic chunking's high recall (91.9%) produces fragments too small for LLMs to reason over.
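A minimal sketch of recursive splitting, using character length as a stand-in for token count and omitting the 10-20% overlap for brevity:

```python
def recursive_split(text, max_len=512, seps=("\n\n", "\n", ". ", " ")):
    # Split on the coarsest separator that appears, greedily packing
    # pieces up to max_len; oversized pieces recurse into finer separators.
    if len(text) <= max_len:
        return [text]
    for i, sep in enumerate(seps):
        if sep not in text:
            continue
        chunks, buf = [], ""
        for piece in text.split(sep):
            cand = buf + sep + piece if buf else piece
            if len(cand) <= max_len:
                buf = cand
            else:
                if buf:
                    chunks.append(buf)
                buf = piece
        if buf:
            chunks.append(buf)
        # Any chunk still oversized falls through to the next separator.
        return [c2 for c in chunks
                for c2 in recursive_split(c, max_len, seps[i + 1:])]
    # No separator left: hard character cut.
    return [text[j:j + max_len] for j in range(0, len(text), max_len)]

text = ("The quick brown fox. " * 60).strip()
chunks = recursive_split(text, max_len=100)
```

A production pipeline would measure length with its actual tokenizer and add the overlap when emitting chunks.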
- Quantize your embedding models to INT8 for CPU-only RAG deployment. INT8 quantization cuts memory by 75% with under 1% quality loss on standard benchmarks. Combined with MRL dimension reduction (384 dims), you get 16x compression. Batch your embeddings in groups of 32-64; processing one at a time wastes compute.
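A minimal sketch of symmetric per-vector INT8 quantization to show where the 75% saving comes from; production setups typically use the embedding library's own calibrated quantization rather than hand-rolled code:

```python
def quantize_int8(vec):
    # Symmetric quantization: map floats in [-a, a] to ints in
    # [-127, 127], keeping one float32 scale per vector. Each element
    # shrinks from 4 bytes to 1 (the 75% cut).
    scale = max(abs(x) for x in vec) / 127 or 1.0  # guard all-zero vectors
    q = [round(x / scale) for x in vec]
    return q, scale

def dequantize(q, scale):
    return [x * scale for x in q]

vec = [0.12, -0.98, 0.45, 0.0]
q, scale = quantize_int8(vec)
approx = dequantize(q, scale)
# Worst-case rounding error is scale / 2 per element.
err = max(abs(a - b) for a, b in zip(vec, approx))
```

Real deployments quantize in batches (the 32-64 grouping above) so the vectorized math amortizes per-call overhead.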
- Add per-head adaptive KV cache quantization to your llama.cpp inference. The bottom 2% of attention heads by entropy ("sink heads") contribute disproportionately to quantization error. Skipping just 3 of 144 heads outperforms optimal bit redistribution. A 33K-token conversation ran at 36.5 tok/s where f16 would OOM on 12GB VRAM. Follow the llama.cpp issue.
- Use GRPO instead of supervised fine-tuning for reasoning tasks. Group Relative Policy Optimization generates multiple responses per prompt, groups them, and normalizes rewards: advantage = (reward - mean) / std. No reward model needed. Combined with complexity-aware data selection (same accuracy with 11% of training data), you can fine-tune a 7B reasoning model for under $1K via QLoRA on consumer hardware.
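The core of GRPO's reward normalization fits in a few lines. This is a sketch of just that step; a full trainer also handles sampling, the policy-gradient update, and the KL penalty:

```python
import statistics

def grpo_advantages(rewards):
    # Group-relative advantages: normalize each sampled response's reward
    # against its own group's mean and std, so no learned reward/value
    # model is needed. (Population std is a choice; some recipes use
    # sample std.)
    mean = statistics.fmean(rewards)
    std = statistics.pstdev(rewards) or 1.0  # guard: identical rewards
    return [(r - mean) / std for r in rewards]

# Four responses sampled for one prompt, scored 1/0 by a verifier:
adv = grpo_advantages([1.0, 0.0, 0.0, 1.0])
# Correct responses get positive advantage, incorrect ones negative.
```

Responses above their group's mean are pushed up, the rest pushed down, which is why verifiable tasks (math, code) suit GRPO so well.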
- Replace RAG with a virtual filesystem for documentation-heavy AI assistants. Mintlify's ChromaFs approach intercepts UNIX commands (grep, cat, ls, find) and translates them to vector DB queries. Session creation dropped from 46 seconds to 100ms. If your users are AI agents, give them filesystem semantics instead of search bars.
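A toy dispatcher in the spirit of ChromaFs, not its actual implementation: the dict stands in for a real vector store, and grep here does substring matching where the real thing would run a semantic query.

```python
# The agent issues familiar UNIX-style commands; we translate them into
# store lookups instead of touching a real filesystem.
DOCS = {
    "guides/auth.md": "How to configure OAuth tokens for the API.",
    "guides/rate-limits.md": "Requests are limited to 100 per minute.",
}

def run(command):
    verb, _, arg = command.partition(" ")
    if verb == "ls":
        return "\n".join(sorted(DOCS))
    if verb == "cat":
        return DOCS.get(arg, f"cat: {arg}: No such file")
    if verb == "grep":
        # A real implementation would run a semantic vector query here.
        return "\n".join(path for path, text in sorted(DOCS.items())
                         if arg.lower() in text.lower())
    return f"{verb}: command not intercepted"
```

The point of the pattern: agents already know how to explore a filesystem, so exposing one is cheaper than teaching them a bespoke search API.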
- Treat every MCP server endpoint as a network-exposed API. Add authentication, rate limiting, and input validation to all MCP servers in your stack. The 30+ CVEs in 60 days prove the ecosystem's default-open posture is a liability. The PraisonAI cluster shows even sandbox layers can't save you if the auth layer is missing.
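A minimal default-deny bearer-token check as a sketch of the posture (token and service names invented; real deployments should lean on OAuth or a gateway rather than a hand-rolled list):

```python
import hmac

API_TOKENS = {"svc-ci": "s3cr3t-token"}  # illustration only

def authorize(headers):
    # Default-deny: a missing or malformed Authorization header fails
    # closed, which is the opposite of the ecosystem's current default.
    auth = headers.get("Authorization", "")
    if not auth.startswith("Bearer "):
        return False
    token = auth[len("Bearer "):]
    # Constant-time comparison avoids timing side channels.
    return any(hmac.compare_digest(token, t) for t in API_TOKENS.values())

assert not authorize({})                                  # fails closed
assert not authorize({"Authorization": "Bearer wrong"})
assert authorize({"Authorization": "Bearer s3cr3t-token"})
```

Whatever the mechanism, the invariant is the same: no request reaches a tool handler without passing an explicit auth gate first.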
- Use cross-encoder reranking as a second pass in your RAG retrieval. Letta's tiered memory pattern shows that vector similarity alone misses important context. A second-pass model that re-scores similarity candidates by cross-encoding the query and document together catches semantic matches that embedding distance misses. Simple composable patterns outperform complex frameworks.
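A sketch of the two-stage shape: cheap vector similarity produces a shortlist, then a second pass re-scores (query, document) pairs. The cross-encoder is stubbed with keyword overlap here; a real system would call a trained reranker model at that point.

```python
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb)

def cross_score(query, doc):
    # Stub for a cross-encoder: a real one jointly encodes the pair.
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / len(q)

def retrieve(query, query_vec, corpus, k=2):
    # corpus: list of (text, embedding) pairs.
    # Stage 1: cheap similarity over precomputed embeddings.
    shortlist = sorted(corpus, key=lambda d: cosine(query_vec, d[1]),
                       reverse=True)[:k]
    # Stage 2: expensive pairwise re-scoring on the shortlist only.
    reranked = sorted(shortlist, key=lambda d: cross_score(query, d[0]),
                      reverse=True)
    return [d[0] for d in reranked]
```

The economics work because stage 2's cost scales with k, not with corpus size.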
- Isolate parallel AI agent sessions beyond git worktrees. Worktrees handle code isolation but leak ports, databases, caches, and environment variables. Use unique port ranges per worktree, separate database instances (or prefixed schemas), and explicitly scrub env vars. Tools like Workmux automate the worktree management, but you still need to wire the runtime isolation yourself.
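One way to wire that runtime isolation is deriving a deterministic port block and namespace per worktree. A sketch; the slot count, variable names, and paths are all invented:

```python
import hashlib
import os

BASE_PORT = 4000
PORTS_PER_WORKTREE = 100
SLOTS = 50  # hash collisions possible beyond a few dozen worktrees

def worktree_env(worktree_path):
    # Deterministic: the same worktree always gets the same port block
    # and schema, so restarted sessions don't drift. A real setup would
    # also scrub inherited env vars and sandbox the network.
    digest = hashlib.sha256(worktree_path.encode()).hexdigest()
    slot = int(digest, 16) % SLOTS
    name = os.path.basename(worktree_path)
    return {
        "DEV_PORT": str(BASE_PORT + slot * PORTS_PER_WORKTREE),
        "DB_SCHEMA": f"wt_{name}",
        "CACHE_DIR": f"/tmp/cache-{name}",
    }

a = worktree_env("/repos/app-feature-auth")
b = worktree_env("/repos/app-feature-billing")
```

Feed the resulting dict into the agent's process environment at launch, and each parallel session binds its own ports, schema, and cache.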
- Run Gemini Embedding 2 for new RAG projects starting today. It leads the 2026 benchmark at 1605 ELO, is the only model completing the full 4K-32K context range with perfect scores, and supports 5 modalities. For budget-constrained setups, Jina v4 with MRL training loses less than 1% accuracy at 256 dimensions, giving you 16x compression with INT8 quantization.
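The MRL truncation step itself is trivial; the model's training is what makes it work. A sketch, valid only for MRL-trained embeddings, where the leading dimensions carry the coarse semantics:

```python
def mrl_truncate(vec, dims=256):
    # Keep the leading dims and re-normalize to unit length so cosine
    # similarity remains meaningful on the shortened vectors.
    head = vec[:dims]
    norm = sum(x * x for x in head) ** 0.5 or 1.0
    return [x / norm for x in head]

vec = [0.5, 0.5, 0.5, 0.5, 0.01, -0.01]  # toy 6-dim "embedding"
small = mrl_truncate(vec, dims=4)
```

Truncating a non-MRL embedding this way silently destroys quality, which is why the training detail in the item above matters.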
Built by MindPattern. Curated by Tayler Ramsay. Reply to let me know what's working, what's not, or what I'm missing.
How This Newsletter Learns From You
This newsletter has been shaped by 12 pieces of feedback so far. Every reply you send adjusts what I research next.
Your current preferences (from your feedback):
- More builder tools (weight: +2.5)
- More agent security (weight: +2.0)
- More agent security (weight: +1.5)
- More vibe coding (weight: +1.5)
- Less market news (weight: -1.0)
- Less valuations and funding (weight: -3.0)
- Less market news (weight: -3.0)
Want to change these? Just reply with what you want more or less of.
Ways to steer this newsletter:
- "More [topic]" / "Less [topic]" — adjust coverage priorities
- "Deep dive on [X]" — I'll dedicate extra research to it
- "[Section] was great" — reinforces that direction
- "Missed [event/topic]" — I'll add it to my radar
- Rate sections: "Vibe Coding section: 9/10" helps me calibrate
Reply to this email — I've processed 8/12 replies so far and every one makes tomorrow's issue better.