
A year of Apple Security Bounty research — 16 closed findings, full disclosure

Spent 2024–2025 filing Apple Security Bounty reports. All 16 are now closed. I've written up every one — including the ones Apple was right to reject, the ones where my own PoC was lying to me, and the few where I couldn't bridge the gap between binary evidence and a working exploit. No hype, no CVE-farming.
submitted by /u/Prize-Unlucky
[link] [comments]

AI-Coded App Vulnerability Checklist - 33 LLM-specific items with detection methods

I recently saw a post titled '20 common AI-coded app vulnerabilities' and thought 20 was nice but very optimistic. As an avid AI user for years, I've personally seen more than 20 on every project that was AI-written not in a targeted manner but in huge chunks. So I got my good friends Claude, ChatGPT, Gemini and Grok to help me throw a few more into it. My initial thought was to package it as a vulnerability scanner, but I'd rather not try to earn money from vulnerabilities; instead I want to encourage users to run their own audits and keep everything free, open source, and open to contributions. And here it is:

Open source checklist of 258 vulnerabilities common in applications built with AI coding assistants. 17 categories. Detection method ([S] static, [R] runtime, [C] config) and severity rating on every item.

The part that isn't in existing references is Category 6: 33 items specific to LLM integration. Some of the less obvious ones:

6.26 - MCP tool poisoning: attacker-controlled MCP server injects instructions into tool results the agent reads as trusted input. Detection: static analysis of MCP server config plus runtime inspection of tool result handling before prompt injection.

6.27 - Agent memory poisoning: malicious content written to long-term memory (vector DB, key-value store, file) is retrieved in a future session and executed in context. Detection: audit memory write paths for content validation before storage.

6.30 - Cross-agent prompt injection: orchestrator passes Agent A's output as Agent B's input without sanitization or trust boundary. Detection: static analysis of multi-agent orchestration code.

6.31 - Insecure agent handoff: parent agent passes full API keys/session tokens to sub-agents rather than scoped credentials with minimum required permissions.
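To make 6.30 concrete, here is a minimal sketch of a trust boundary on an agent-to-agent handoff. The function names, the pattern list, and the wrapper tag are all illustrative, not part of the checklist itself; a real filter would need a far richer pattern set or a classifier.

```python
import re

# Illustrative deny-list of instruction-like phrases. In practice this would be
# much broader (or replaced by a trained injection classifier).
INSTRUCTION_PATTERNS = [
    re.compile(r"(?i)ignore (all )?previous instructions"),
    re.compile(r"(?i)you are now"),
    re.compile(r"(?i)system prompt"),
]

def sanitize_handoff(agent_a_output: str) -> str:
    """Treat Agent A's output as untrusted data before it reaches Agent B."""
    cleaned = agent_a_output
    for pat in INSTRUCTION_PATTERNS:
        cleaned = pat.sub("[REDACTED-INSTRUCTION]", cleaned)
    # Wrap as data so the downstream prompt template can mark it untrusted.
    return f"<untrusted_agent_output>{cleaned}</untrusted_agent_output>"
```

The key point is structural: the orchestrator never splices Agent A's raw text into Agent B's instruction context, only into a clearly delimited data slot.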

Companion prompt.md runs all 258 checks against a codebase using Claude Code or any capable LLM CLI. Returns file paths, line numbers, code snippets, specific remediations.

Apache 2.0 license, so anyone who wants to build anything around this is free to do so.

submitted by /u/6biz
[link] [comments]

MyAudi app: Security issues in Audi Connected Vehicle experience

I recently published a security research post on the myAudi connected vehicle platform. I found that anyone with a VIN can access sensitive information about the car and its ownership.
I think the topic is useful beyond Audi itself, because many vendors now rely on these “connected vehicle” platforms and mobile apps, often with very similar architectures and assumptions.

submitted by /u/decoder-ap
[link] [comments]

ShinyHunters / AT&T ransom payment traced on-chain — paper draft, seeking arXiv cs.CR endorsement

Across all major ShinyHunters campaigns (AT&T/Snowflake, Salesforce, Canvas/Instructure), only one event has both a publicly stated payment amount and a known approximate settlement date: the May 2024 AT&T payment of ~5.7 BTC (~$370K), confirmed by Wired but never published with a transaction hash. I use that as the analytical anchor for an end-to-end on-chain analysis using only free public data.

Pipeline (5 stages):

  1. BigQuery bulk filter on amount and time window → 500 candidates.
  2. Recipient profiling via Blockstream Esplora (lifetime tx count, spend shape).
  3. Sender-side cluster analysis using common-input ownership; looking for broker-aggregation patterns.
  4. Depth-12 concurrent forward trace, top-K=4 fan-out.
  5. Terminal attribution via OKLink, BitInfoCharts, WalletExplorer.
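Stage 3's common-input ownership heuristic can be sketched in a few lines: any addresses that co-appear as inputs of the same transaction are merged into one cluster via union-find. The transaction shapes below are illustrative, not the paper's dataset or schema.

```python
from collections import defaultdict

class UnionFind:
    """Minimal union-find with path halving."""
    def __init__(self):
        self.parent = {}
    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[rb] = ra

def cluster_inputs(transactions):
    """transactions: list of lists of input addresses per tx.
    Returns address clusters under common-input ownership."""
    uf = UnionFind()
    for inputs in transactions:
        for addr in inputs:
            uf.find(addr)               # register every address
        for addr in inputs[1:]:
            uf.union(inputs[0], addr)   # co-spent inputs => same owner
    clusters = defaultdict(set)
    for addr in uf.parent:
        clusters[uf.find(addr)].add(addr)
    return list(clusters.values())
```

The heuristic is known to over-merge in the presence of CoinJoins, which is presumably why the paper pairs it with the broker-aggregation pattern check rather than relying on clustering alone.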

Result:

A single highest-fit candidate: 5.71997804 BTC paid 2024-05-17 22:04 UTC to a fresh recipient, spent in 6 min, laundered through a 6-cycle automated peel chain, terminating at an exchange deposit cluster. Funding side shows broker-aggregation fingerprint (4× 1.147 BTC peels in a 90-min window pre-payout). Upstream hub addresses appear reused across multiple victims of the same laundering service, active through 2025. Paper closes with the legal pathway from chain endpoint to indictment and a scoped compliance-request template.
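The peel-chain structure described above can be approximated by a simple heuristic: each hop has exactly two outputs, one small peel and one much larger change continuation. The 3× ratio threshold and the values below are illustrative assumptions, not the paper's actual scoring scheme.

```python
def count_peel_cycles(hops):
    """hops: list of per-tx output value lists (BTC), in chain order.
    Counts consecutive hops that look like peel steps: exactly two
    outputs, with the change output much larger than the peel."""
    cycles = 0
    for outputs in hops:
        if len(outputs) == 2 and max(outputs) > 3 * min(outputs):
            cycles += 1
        else:
            break  # chain shape ends at the first non-peel hop
    return cycles
```

On a hypothetical chain of six shrinking change outputs followed by a near-even split, this returns 6 — the cycle count claimed for the candidate's laundering path.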

Limitations (explicit in §5):

Ranking under a scoring scheme, not positive ID. No off-chain ground truth. Documented OKLink vs. Arkham label conflict on the dominant terminal, resolved via behavioural audit. No formal null-distribution analysis yet. Score weights are author judgements.

Asking for:

  1. Technical feedback / methodology critique.
  2. arXiv cs.CR endorsement — endorsement code: ZQXBSQ

    github.com/tr4m0ryp/shinyhunters-gotta-catch-em-all/blob/main/Gotta_Catch_Em_All_ShinyHunters.pdf

Tooling and dataset released for reuse

submitted by /u/Visual_Course6624
[link] [comments]

The compression of the exploit timeline: Why n-day gaps and 90-day embargoes are failing in practice.

The traditional vulnerability disclosure timeline relies on a fundamental assumption: exploit development and vulnerability discovery take time. Over the last 12 months the integration of LLMs into offensive tooling has demonstrably broken this assumption.
I recently published a technical write-up arguing that the 90-day disclosure window is effectively dead, backed by three specific observations from recent incidents:

  1. Automated Diff Analysis (30-minute n-days): The safety net between a patch release and an in-the-wild exploit is gone. Taking a recent React security patch (CVE-2026-23870), I used an LLM to analyze the diff, identify the vulnerable path, and write a working DoS PoC in roughly 30 minutes. The human reverse-engineering bottleneck has been bypassed.
  2. Vulnerability Convergence: I recently reported a critical P0 to a vendor and was told I was the 11th reporter in 6 weeks. LLM-assisted scanners are causing independent researchers to converge on the same bugs simultaneously. An embargo no longer contains the vulnerability; it simply provides a head start to whichever threat actor also found it.
  3. The Linux Kernel (Copy Fail & Dirty Frag): The recent kernel exploits highlight this perfectly. Copy Fail (CVE-2026-31431) went from an automated AI scan to a public PoC to nation-state weaponization in days. Shortly after, the embargo for Dirty Frag (CVE-2026-43284 / CVE-2026-43500) was broken in hours because an unrelated third party independently discovered the same bug class using similar tooling.

The defense cannot operate on monthly cycles when the offense is operating in hours. The focus needs to shift to real-time, PR-level AI scanning to match the pace.
You can read the full technical breakdown and case studies on my blog: https://blog.himanshuanand.com/2026/05/the-90-day-disclosure-policy-is-dead/

I am curious if the researchers here are experiencing similar convergence rates or if you view this as a temporary anomaly while legacy codebases are scanned with new tools.

submitted by /u/unknownhad
[link] [comments]

Memory Poisoning AI Agents via ChromaDB

Built a self-contained PoC (using Claude Code) demonstrating memory poisoning against an AI agent with persistent vector memory.

The attack

An adversary with write access to the ChromaDB directory injects a crafted entry with realistic metadata (session_id, backdated timestamp, authoritative source tag). The payload is semantically close to queries the agent will receive, so it ranks at the top of retrieval results. The agent treats it as fact. No prompt injection. No jailbreak.

The hard part to detect

Nothing anomalous in the logs. The poisoned entry looks identical to a legitimate memory in retrieval output.

The PoC shows two mitigations

  • HMAC signing over content + metadata — unsigned entries rejected before reaching the LLM
  • Source scoping: cross-session injections filtered at retrieval time
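The HMAC mitigation can be sketched with the stdlib alone (this is a minimal illustration under assumed field names, not the PoC's actual code; the key would come from a secret store, never the repo):

```python
import hashlib
import hmac
import json

SECRET_KEY = b"rotate-me"  # illustrative only; load from a secret manager

def sign_entry(content: str, metadata: dict) -> str:
    """MAC over a canonical serialization so content AND metadata are covered."""
    payload = json.dumps({"content": content, "metadata": metadata},
                         sort_keys=True, separators=(",", ":")).encode()
    return hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()

def verify_entry(content: str, metadata: dict, signature: str) -> bool:
    """Reject any memory entry whose MAC does not verify, before it reaches the LLM."""
    expected = sign_entry(content, metadata)
    return hmac.compare_digest(expected, signature)  # constant-time compare
```

Signing content and metadata together matters here: the attack relies on forged metadata (backdated timestamp, authoritative source tag), so a MAC over content alone would still let a poisoned entry masquerade as trusted.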

Stack:

ChromaDB, all-MiniLM-L6-v2 via fastembed (ONNX), pure Python stdlib for the HMAC defense. Runs fully offline, no API keys.

Blog post: https://mamtaupadhyay.com/2026/05/09/agent-memory-poisoning-demo/
Code: https://github.com/m-pentest/memory-poisoning-demo/
Demo Video: https://youtu.be/Pb46i3ZLK8g

submitted by /u/Big_Impression_410
[link] [comments]

Technical Analysis of EagleSpy V6.0 (CraxsRAT Rebrand) Distributed Through Odysee and Telegram

I recently investigated an individual operating through Odysee and Telegram who is selling a malicious Android RAT known as EagleSpy V6.0, which appears to be a rebranded version of CraxsRAT.

During the investigation:

- I was financially scammed after payment
- The seller blocked communication afterward
- The malware infrastructure was analyzed in detail

Technical analysis confirmed:

- Banking phishing overlays
- Crypto wallet credential theft
- Telegram bot exfiltration
- Remote shell execution
- Keylogging
- Camera/microphone access
- GPS tracking
- Ransomware components
- DEX packers for AV evasion
- Hidden update/backdoor mechanisms

The repository also contained evidence of real victim infrastructure and compromised device information.

The malware appears capable of targeting not only victims, but potentially even buyers/operators through embedded update systems and hidden control mechanisms.

Relevant reports have already been submitted to platform abuse teams.

Odysee channel involved:

https://odysee.com/@justicerat:e

Telegram:

@JustIcedevs

This post is intended purely as a cybersecurity awareness warning to help prevent additional victims.

If moderators require technical validation or indicators of compromise, I can provide structured analysis details privately.

submitted by /u/CranberryOk2634
[link] [comments]