Introduction
In this chapter, we'll work quickly through all of the easy-level Type Challenges.
I'll share my thought process, code, and some additional notes.
Warm-up
Let's start with the warm-up problem, labeled with the warm-up difficulty.
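For reference, the warm-up challenge ("Hello World") only asks you to replace `any` with the correct type; a minimal sketch, including the `Expect`/`Equal` helpers in the style the challenge repo uses for its test cases:

```typescript
// The warm-up challenge: make HelloWorld be the string type.
type HelloWorld = string; // expected to be string

// Type-level assertion helpers (same idea as the repo's test utilities)
type Expect<T extends true> = T;
type Equal<X, Y> =
  (<T>() => T extends X ? 1 : 2) extends (<T>() => T extends Y ? 1 : 2)
    ? true
    : false;

// This line only compiles if HelloWorld is exactly string.
type cases = [Expect<Equal<HelloWorld, string>>];

const greeting: HelloWorld = "hello world";
```

If `HelloWorld` were anything other than `string`, the `cases` tuple would fail to type-check, which is how the challenge verifies your answer.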

Because there are so many medium-difficulty questions, I've put together a navigation index here to make them easier to browse.
It was 2 AM. The Leverage OJ frontend had been happily serving pages for hours, then something caused it to crash. A quick restart later, every route returned the default Nuxt welcome screen:
Remove this welcome page by replacing <NuxtWelcome /> in app.vue with your own code...
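In a fresh Nuxt 3 project, app.vue renders the `<NuxtWelcome />` component by default; the usual fix is to swap it for `<NuxtPage />` (or your own markup) so the router renders your pages again. A minimal sketch of what app.vue should look like, not the actual Leverage file:

```vue
<!-- app.vue: replace the default welcome component with the router outlet -->
<template>
  <NuxtPage />
</template>
```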
Last week I wrote about the threat model for running student code in AWS Lambda. This week we built the thing and tried to break it.
The result: sandbox_exec, a 224-line C program that wraps student submissions in a seccomp-bpf filter, enforces resource limits, and passes the 5-round red-team gauntlet.
Last week we shipped sandbox_exec — a 224-line C program using seccomp-bpf to isolate student code in AWS Lambda. The honest answer at the time was: "WASM would be cleaner, but the Python ecosystem isn't there yet."
This week we measured exactly what "the Python ecosystem isn't there yet" costs in milliseconds. The answer is more nuanced than expected.
This project is still under development!
Type Challenges is a project that aims to provide a collection of type challenges with the goal of helping people learn TypeScript.
A few months ago I started a serious code review of Leverage, a NestJS Online Judge platform that had been running in production for years. No tests. No linter enforcement. No formal review process. Just code that had grown organically, feature by feature, under deadline pressure.
I came out of it with 29 documented issues. Some were minor style things. Six of them were the kind of bugs that make you stare at the screen for a moment and think "how has this been running?"
CUDA Agent: Large-Scale Agentic RL for High-Performance CUDA Kernel Generation
ByteDance Seed + Tsinghua AIR (SIA-Lab), 2026
cuda-agent.github.io
Writing fast GPU kernels is genuinely hard. You need to understand memory hierarchy, warp scheduling, bank conflicts, tensor core layouts, and about fifty other microarchitectural details that change between GPU generations. Most engineers — including most ML engineers — don't have this knowledge. They use libraries (cuBLAS, cuDNN, FlashAttention) and hope for the best.
Authentication is one of those things that feels solved — until you inherit a codebase where it isn't. When I started the Leverage OJ rewrite, the auth system was three separate problems wearing a trench coat: a session setup that broke under PM2, a ContestUser concept that had diverged into its own parallel auth universe, and a password hashing scheme that was one config leak away from a full credential dump.
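The original hashing scheme isn't shown here, but "one config leak away from a full credential dump" suggests a scheme whose security rested on a secret kept in config rather than on per-user salts. As a minimal sketch of the standard fix, assuming Node, using the built-in scrypt KDF with a random per-user salt (function names here are illustrative, not Leverage's actual API):

```typescript
import { randomBytes, scryptSync, timingSafeEqual } from "node:crypto";

// Hash a password with a fresh random salt; store "salt:hash".
// Because the salt is per-user and stored alongside the hash, there is
// no single config secret whose leak breaks every account at once.
function hashPassword(password: string): string {
  const salt = randomBytes(16).toString("hex");
  const hash = scryptSync(password, salt, 64).toString("hex");
  return `${salt}:${hash}`;
}

// Recompute with the stored salt and compare in constant time.
function verifyPassword(password: string, stored: string): boolean {
  const [salt, hash] = stored.split(":");
  const candidate = scryptSync(password, salt, 64);
  return timingSafeEqual(candidate, Buffer.from(hash, "hex"));
}
```

In production you'd reach for argon2 or bcrypt, but scrypt ships with Node and illustrates the shape of the fix: the thing that must stay secret is the password itself, not a config value.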
The submission pipeline is the critical path of an Online Judge. A student submits code, it goes into a queue, a worker picks it up, sends it to the judge, waits for results, writes them back. Simple in theory. The original Leverage implementation was a custom queue built on Redis Lists — and it had problems that only showed up when things went sideways.
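The original Redis Lists implementation isn't reproduced here, but the classic failure mode of a bare LPUSH/RPOP queue is that a worker crash mid-job silently loses the submission. A minimal in-memory sketch of the pattern that fixes this (in Redis terms, LMOVE into a per-worker processing list plus an explicit ack); the class and method names are illustrative, not Leverage's actual code:

```typescript
// A job is *moved* to a processing list rather than popped outright,
// and only removed after an explicit ack. A crash between take() and
// ack() leaves the job recoverable instead of lost.
class ReliableQueue<T> {
  private pending: T[] = [];
  private processing: T[] = [];

  push(job: T): void {
    this.pending.push(job);
  }

  // Move the oldest job into the processing list (like Redis LMOVE).
  take(): T | undefined {
    const job = this.pending.shift();
    if (job !== undefined) this.processing.push(job);
    return job;
  }

  // Acknowledge completion: only now does the job leave the queue.
  ack(job: T): void {
    this.processing = this.processing.filter((j) => j !== job);
  }

  // After a worker crash, requeue everything that was never acked.
  recover(): void {
    this.pending.unshift(...this.processing);
    this.processing = [];
  }
}
```

This is also roughly the guarantee you get for free by moving to a real job-queue library instead of hand-rolling one on Redis primitives.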