bebop404

HKCERT CTF 2023: Secure Python 2

Bypassing a Python AST sandbox with Unicode fullwidth lookalikes applied as decorators — exec and input slip through the keyword filter undetected.

This challenge places us inside a heavily restricted Python sandbox where dangerous functionality like exec, __import__, and even input has been blocked. Instead of simply blacklisting keywords, the sandbox takes things a step further by analyzing the code's Abstract Syntax Tree (AST) and transforming or rejecting nodes that contain forbidden calls. On paper, that sounds much more secure than basic string filtering.

But as always in CTFs, the interesting part isn't what's blocked — it's what's overlooked.

The key detail here is how the sandbox treats decorators. Direct calls to exec() or __import__() appear in the AST as Call nodes and are detected and rejected during inspection, but a bare decorator like @exec is parsed as a plain Name node in the class's decorator_list, not as a call. If the filtering logic only scrutinizes call expressions, decorator expressions slip through without triggering the same checks. That subtle difference becomes the entire attack surface.
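A minimal sketch of a filter with this blind spot (hypothetical code in the spirit of the challenge, not its actual source):

```python
import ast

# Hypothetical filter: reject any *call* to a blacklisted name. Decorator
# lists hold bare Name nodes, not Call nodes, so a plain "@exec" decorator
# is never inspected by this check.
BLACKLIST = {"exec", "eval", "input", "__import__"}

def is_blocked(src: str) -> bool:
    for node in ast.walk(ast.parse(src)):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in BLACKLIST:
                return True
    return False

print(is_blocked("exec(input())"))                       # True: caught
print(is_blocked("@exec\n@input\nclass X:\n    pass"))   # False: missed
```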

Another important piece of the puzzle is Python 3's support for Unicode identifiers (PEP 3131). Python allows non-ASCII characters in variable and function names, and it applies NFKC normalization to every identifier at compile time. That means we can write identifiers that visually resemble exec or input and are different character sequences in the source, yet normalize back to the real names when the code runs. Replacing the normal ASCII letters with their fullwidth Unicode counterparts (ｅｘｅｃ, ｉｎｐｕｔ) creates names that a blacklist comparing raw source text never matches, while the interpreter still resolves them to the actual built-ins.
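This normalization behavior is easy to verify from a normal Python shell:

```python
import unicodedata

# Fullwidth "ｅｘｅｃ" (U+FF45 U+FF58 U+FF45 U+FF43) is a different string
# from ASCII "exec", so a source-level string comparison never matches it.
lookalike = "\uff45\uff58\uff45\uff43"
print(lookalike == "exec")                                  # False
# But identifiers are NFKC-normalized (PEP 3131), collapsing it back:
print(unicodedata.normalize("NFKC", lookalike) == "exec")   # True
# The same happens at compile time: a fullwidth identifier defines the
# plain-ASCII name.
ns = {}
exec("ｘ = 42", ns)
print(ns["x"])                                              # 42
```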

Putting these ideas together gives us a clean bypass. Instead of calling exec directly, we use Unicode lookalikes and apply them as decorators to a class definition:

@ｅｘｅｃ
@ｉｎｐｕｔ
class X:
    pass

To the sandbox's keyword filter, these aren't the forbidden exec and input: as raw source text they are entirely different characters. When Python compiles the code, however, NFKC normalization turns both identifiers back into the real built-in names. Decorators are applied bottom-up as the class statement executes, so this evaluates ｉｎｐｕｔ(X) first and then passes the string it reads into ｅｘｅｃ, the equivalent of exec(input(X)).
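The evaluation order of stacked decorators can be checked with harmless stand-ins (the names below are illustrative):

```python
# Stacked decorators apply bottom-up: the bottom decorator receives the
# class object first, and the top one runs last on the previous result.
calls = []

def bottom(obj):
    calls.append("bottom")
    return obj

def top(obj):
    calls.append("top")
    return obj

@top
@bottom
class X:
    pass

print(calls)  # ['bottom', 'top']: equivalent to X = top(bottom(X))
```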

At that point, we simply provide the payload as user input:

__import__("os").system("cat flag.txt")

The input decorator reads our payload, and the exec decorator executes it. Since the AST filter only guarded against direct usage in the original source code, the injected string bypasses the protection entirely and executes as normal Python. From there, importing os and calling system gives us command execution, allowing us to read the flag.
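The whole chain can be simulated locally by stubbing input() so the "user" supplies the payload automatically (the payload string and FLAG value below are illustrative, not the challenge's):

```python
# Stand-in for typing the payload at the prompt; shadows the builtin.
input = lambda prompt=None: "FLAG = 'flag{demo}'"

@exec    # runs second: executes the string returned by input()
@input   # runs first: "prompts" with the class object, returns the payload
class X:
    pass

print(FLAG)  # flag{demo}: the exec'd payload defined this variable
```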

This challenge highlights a classic sandboxing lesson: AST filtering is stronger than naive string blacklists, but it still depends entirely on implementation details. If certain syntax constructs — like decorators — are handled differently, they can become unexpected escape vectors. Combined with Unicode identifier tricks, it becomes surprisingly easy to slip past keyword-based defenses.

In the end, the sandbox wasn't broken by brute force — it was broken by understanding how Python parses and executes code.

Challenge source

hkcert-ctf-2023-challenges/45-secure-python2