Introduction: The Closing Window
General purpose computing is a historical anomaly. For a few decades — roughly from the late 1970s to the early 2020s — ordinary people had access to machines they could program, inspect, and modify without asking anyone’s permission. This was unprecedented: for the first time, an individual could acquire a device capable of performing any computation, running any software, and communicating with anyone, all without institutional approval. The consequences were extraordinary: entire industries created in garages, political movements organized on open networks, scientific discoveries made by amateurs with access to the same computational tools as universities. General purpose computing was the most significant democratization of knowledge and capability in human history.
That era is coming to an end.
A convergence of corporate and government interests is systematically replacing user-controlled computing with centrally managed infrastructure. The trajectory is visible at every layer of the stack: hardware that phones home to manufacturers before it will boot, operating systems that refuse to run unapproved software, app stores that serve as licensing chokepoints for what users may install, and regulatory frameworks that mandate all of the above. Each of these changes is presented as a security improvement. Taken together, they constitute a structural power shift away from users and toward a small number of corporations and governments.
The costs of this shift extend far beyond technical inconvenience. Centralized compute control destroys innovation by eliminating the individual’s ability to experiment outside sanctioned channels — the garage startup, the unauthorized fork, the hobbyist project that becomes an industry. It interferes with political processes by placing the infrastructure of communication and organization under state and corporate authority — when a government can remove an app from a store, it can silence a movement. And it creates pervasive fear among developers, researchers, and ordinary users, who learn that their tools are not truly theirs and that unauthorized use carries professional and legal risk. The chilling effect is difficult to measure and impossible to overstate.
There is a revealing asymmetry in who these restrictions actually burden. Sophisticated criminals, state-sponsored hackers, and terrorist organizations have both the incentive and the means to circumvent app store controls, attestation requirements, and centralized update mandates. The people who cannot circumvent them — and who bear the full weight of their costs — are ordinary users, small businesses, independent developers, and political dissidents. If the restrictions were genuinely about security, their failure to impede the most dangerous actors would be a scandal demanding reform. Instead, the restrictions persist and expand, because the actors they do successfully constrain — competitors, innovators, and citizens — are precisely the ones that monopolistic corporations and authoritarian governments wish to constrain.
The “security” justification for all of this is largely theater. CrowdStrike’s catastrophic global outage in July 2024 demonstrated what security researchers have warned about for years: centralized update authority is itself a single point of catastrophic failure. When one vendor with kernel-level, auto-deploying update authority over millions of machines pushes a bad update, the result is not security but systemic fragility on a scale that no collection of individual user mistakes could produce. The architecture being built in the name of security makes the entire computing ecosystem more brittle, not less.
The critics who warned us were prescient, systematic, and ignored until vindicated. Richard Stallman and the Free Software Foundation warned for decades about proprietary software, tivoization, and remote kill switches. Ross Anderson at Cambridge identified Microsoft’s “Palladium” Trusted Computing initiative as a control mechanism as early as 2003, and described exactly the threat model that is now materializing two decades later. Cory Doctorow framed the “war on general purpose computing” as the defining political issue of the digital age and coined “enshittification” to describe the platform degradation dynamics driving it. Bruce Schneier warned about the systemic risk of centralized IoT and infrastructure control. The EFF and ACLU have fought government compulsion of software update channels as surveillance vectors. The pattern is consistent: technically correct warnings were dismissed as paranoid until the predicted outcomes materialized.
The thesis of this essay is that centralization is neither accidental nor purely technical. It serves regulatory capture, surveillance, and the construction of barriers to entry. Its costs in human freedom and economic dynamism vastly exceed any claimed security benefits. And there are concrete, buildable countermeasures available to those willing to use them.
The Regulatory Capture Architecture
The regulatory landscape driving computing centralization operates through distinct but converging mechanisms in the United States and Europe. Despite different legal philosophies, the practical endpoint is the same: vendor-controlled, government-accessible infrastructure.
US federal mechanisms rely primarily on covert compulsion. National Security Letters, FISA court orders, and CALEA wiretapping mandates operate under gag orders and secrecy provisions that prevent public accountability. Companies receiving NSLs are forbidden even to disclose that they have received them. FISA court proceedings are classified. CALEA, originally designed to ensure law enforcement access to telephone networks, provides the legal template for extending compelled access to software distribution channels. The mechanism is legally obscured by design — citizens cannot resist obligations they cannot see.
US state-level fragmentation creates a different but equally damaging dynamic. California’s CCPA/CPRA functions as a de facto national privacy standard because companies find it easier to comply globally than to geo-fence California residents. New York’s SHIELD Act, Illinois’s BIPA with its aggressive private right of action for biometric data, and cascading privacy laws in Washington, Texas, and other states create a compliance patchwork that only large incumbents can navigate efficiently. The aggregate burden is functionally equivalent to the EU’s ex-ante regulatory model, but without the uniformity — arguably worse for smaller developers, who face the same compliance costs without a single clear standard.
The European Union takes a fundamentally different approach: ex-ante prescriptive compliance, where system architecture is mandated before market entry. GDPR dictates data handling architecture. The Digital Services Act and Digital Markets Act impose structural obligations on platforms. The Cyber Resilience Act mandates vendor-controlled update obligations and vulnerability disclosure with CE marking for software. NIS2 extends critical infrastructure software update mandates. The EU AI Act adds compliance architecture requirements with update and change notification obligations. Each of these frameworks requires vendors to architect systems in specific ways as a condition of market access, and the compliance costs of meeting them structurally favor incumbents over smaller competitors and open-source maintainers.
The Brussels Effect is the mechanism by which EU regulation becomes global without any democratic process outside Europe. Because single market access requires full compliance, and because maintaining separate US and EU product versions is economically irrational above a threshold scale, vendors architect globally to EU standards rather than maintain two codebases. GDPR already demonstrated this: most US companies adopted global data handling changes rather than geo-fence European users. The Cyber Resilience Act will follow the same pattern. If the EU mandates centralized, vendor-controlled, certified update pipelines, US products will comply globally — not because Congress legislated it, but because it is cheaper than building two versions. This is arguably the most significant vector by which European regulatory preferences become embedded in global software infrastructure, affecting American users without American legislative action.
The convergent endpoint is identical regardless of the legal path. US covert compulsion and EU overt mandate produce the same practical outcome: a small number of large vendors controlling update infrastructure that is accessible to governments. Whether the mechanism is a secret FISA order or a public CRA certification requirement, the architecture is the same — centralized, vendor-managed, government-reachable. The regulatory capture is bilateral: incumbents co-author the compliance frameworks they can meet and competitors cannot, while governments acquire surveillance infrastructure built into the commercial supply chain.
The Corporate Centralization Architecture
The corporate drive toward centralized control operates at every layer of the computing stack, from silicon to application distribution, and follows a consistent pattern: each change is framed as a security improvement while serving to maximize ecosystem disruption and entrench incumbent control.
The hardware control layer operates beneath the operating system, invisible to most users. Intel’s Management Engine and AMD’s Platform Security Processor are always-on subsystems embedded in every modern x86 processor, running their own firmware with full memory access, network capability, and remote management features — all below the operating system’s awareness or control. TPM 2.0 chips and Microsoft’s Pluton processor provide attestation infrastructure, enabling a device to cryptographically prove what software it is running to a remote party. UEFI Secure Boot, replacing the user-configurable BIOS of earlier machines, establishes a chain of trust from firmware to operating system that defaults to vendor-controlled signing keys. Together, these technologies implement Ross Anderson’s 2003 warning about Trusted Computing almost exactly as he described it: hardware that answers to the manufacturer rather than the owner.
The operating system layer extends hardware control into software distribution. Microsoft’s Windows 11 requires TPM 2.0, mandates online account creation, and pushes users toward Windows S Mode where only Microsoft Store applications can be installed. Apple’s iOS has always been a walled garden; macOS now requires notarization for all distributed software, with Apple retaining the ability to revoke approval at any time. Google’s Play Protect on Android and Chrome OS’s managed update model complete the picture. In each case, the user’s ability to run software of their choosing is mediated by a corporate gatekeeper with the technical and legal authority to refuse.
The Linux subversion is perhaps the most strategically significant development, because Linux was supposed to be the alternative. The open-source operating system that powered the server revolution and provided a refuge for users seeking control over their own machines is being systematically captured from within. systemd, originally an init system replacement, has absorbed logging (journald, replacing Unix’s plaintext logs with a binary format requiring specialized tooling), networking (networkd), DNS resolution (resolved), user and home directory management (homed), boot loading (systemd-boot), and device management (udev). It is now effectively mandatory on every major Linux distribution, creating a single, opaque, corporate-controlled layer managing most system functions. Wayland’s replacement of X11 broke decades of working applications, tools, remote desktop infrastructure, and accessibility software — compatibility destruction framed as a security improvement, when application isolation could have been implemented as X11 extensions without a clean break. The push to introduce Rust into the Linux kernel, over significant objection from senior maintainers, creates a dependency on a language whose foundation is governed by Microsoft, Google, and Amazon, and whose complexity advantages corporate developers with training budgets over independent contributors. Canonical’s Snap package system replaces the open Debian package ecosystem with a closed-source, Canonical-controlled store backend, silently substituting Snap equivalents for apt packages.
The corporate personnel network reinforces these dynamics. Key open-source infrastructure developers cycle through Microsoft, Google, and Red Hat in a pattern that aligns open-source project roadmaps with corporate interests. The trajectory of Lennart Poettering — creator of systemd at Red Hat, subsequently employed by Microsoft working on WSL and Azure systemd integration, and in 2026 co-founding Amutable with other ex-Microsoft Linux developers to build “verifiable integrity” and attestation tooling for Linux — is a microcosm. Amutable’s stated mission of cryptographic signing, reproducible builds, and runtime attestation is genuinely useful for security and is precisely the technical prerequisite for remote enforcement of approved system states. All three Amutable founders are former Microsoft employees. The revolving door between open-source infrastructure development and platform incumbents is not a conspiracy; it is an incentive structure that reliably produces outcomes serving corporate interests.
The unified pattern across all of these developments is consistent: each is framed as a security or quality improvement, each is implemented in a way that maximally disrupts existing ecosystems, and the transition costs are absorbed by corporations with resources while falling most heavily on individual and community developers who cannot. Whether this is coordinated is less important than the structural observation that the incentives all point in the same direction. The net effect is a Linux ecosystem that increasingly requires corporate infrastructure, corporate tooling, and corporate contribution pipelines to participate in meaningfully — precisely the enshittification dynamic Doctorow describes, where a platform is degraded for its original users while being optimized for the institutional stakeholders who captured it.
Application Control as the Central Battleground
The fight over application control is the strategic center of the conflict over general purpose computing, and understanding why requires following the logic of encryption.
End-to-end encryption has, by most practical measures, prevailed. The Signal protocol is now embedded in WhatsApp, iMessage uses strong encryption by default, and even casual users routinely send messages that no third party can read in transit. This represents a genuine and substantial victory for privacy, and governments have so far been unable to reverse it through direct technical or legal attacks on the cryptographic protocols themselves. The UK’s Online Safety Act, the EU’s proposed chat control regulation, and various Australian and American legislative efforts to mandate backdoors have all foundered on the technical reality that weakened encryption is broken encryption, and the political reality that mandating insecure communications is electorally toxic.
Governments that cannot break encryption in transit have shifted the interception point to the endpoint. If you cannot read the message on the wire, the next move is to control what software is allowed to create and read messages on the device. This is the strategic logic behind the push toward application control, and it reframes the entire “walled garden is for security” argument. The security being protected is partly the state’s surveillance access, not just the user’s device integrity.
App store certification is the primary mechanism. When a platform controls what applications may be installed, it controls what encryption clients users can access. A VPN app that doesn’t comply with local law simply doesn’t appear in the store — or gets removed, as Apple has repeatedly demonstrated in China by deleting VPN, news, and communication applications on government demand. A Signal fork with features a government dislikes can be blocked at the distribution layer without anyone touching the protocol. The app store becomes a licensing chokepoint for what software users may run, exactly as CALEA created a licensing chokepoint for what telecommunications equipment carriers may deploy. The logic is identical; only the layer of the stack has changed.
CALEA’s extension from network carriers to software distributors is the legal template. CALEA required telephone companies to build wiretap capability into their switching infrastructure as a condition of operating. The app store model applies equivalent logic to software distributors: the ability to install software on a device becomes contingent on the distributor’s compliance with government requirements, which can include lawful intercept capability, content restrictions, or identity verification. No legislation explicitly extending CALEA to app stores has been necessary, because the corporate incentive to maintain app store control is sufficient — governments need only ask, and the infrastructure to comply already exists.
Direct government compulsion is not hypothetical. Apple’s China App Store removals are the clearest demonstration: VPN applications, news applications, and communication tools have been deleted on demand. The mechanism is simple and effective — Apple controls the distribution channel, the Chinese government specifies what must be removed, and the applications vanish. Users who had previously installed them may find them disabled or unable to update. This has occurred repeatedly, is publicly documented, and establishes the precedent that app store gatekeeping is a government-accessible control mechanism wherever it exists.
Attestation infrastructure completes the architecture. When a device can cryptographically prove to a remote party that it is running only certified, unmodified software, the encryption underneath becomes irrelevant. If the endpoint is certified to be running a compliant messaging client — one that, say, includes a content-scanning module or retains message metadata — then it does not matter that the messages are encrypted in transit. The plaintext is accessible at the certified endpoint. This is precisely what Ross Anderson warned about in 2003 when he described Trusted Computing as a mechanism for remote parties to verify and enforce what software a user’s machine is running. TPM 2.0, Pluton, UEFI Secure Boot, and the emerging attestation ecosystem being built by projects like Amutable are the materialization of that warning. A device proving it runs only certified software renders the user’s choice of encryption protocol a footnote.
The regulatory apparatus is converging on this model from multiple directions. The Cyber Resilience Act’s mandatory update obligations ensure that vendor-controlled update channels exist. The UK Online Safety Act creates implicit pressure for content scanning at the endpoint. The EU’s proposed chat control regulation would mandate client-side scanning of encrypted messages. Each of these initiatives is individually debatable; collectively, they constitute a systematic effort to move the surveillance and control layer from the network — where encryption defeated it — to the device, where app store control and attestation infrastructure make it effective.
The privacy movement largely succeeded on encryption — and governments and incumbents responded by moving the control layer up the stack to distribution and attestation, effectively routing around the cryptographic victory. This is why application control has become the central battleground: it is the mechanism by which the gains in encryption are being nullified.
What Has Been Lost
The shift toward centralized control has already inflicted substantial damage on the computing ecosystem, and the losses extend beyond abstract principles to concrete capabilities that users once had and no longer do.
The Unix philosophy — the design principle that software should consist of small, composable, auditable tools that do one thing well and communicate through plaintext interfaces — has been systematically abandoned. systemd replaces a collection of small, independent, inspectable programs with a monolithic integrated platform whose components are tightly coupled and whose logging format is binary, requiring specialized tooling to read. GNOME and KDE have grown to millions of lines of code, well beyond individual comprehension. The modern Linux desktop stack involves dependency chains so deep and intertwined that removing any one component cascades into failures across seemingly unrelated subsystems. The result is software that no individual can fully understand, audit, or trust — a direct inversion of the principle that made Unix-derived systems trustworthy in the first place.
The right to modify software on owned hardware has been eroded both legally and technically. The DMCA criminalizes circumvention of technical protection measures, which courts have interpreted to cover everything from printer cartridge chips to tractor firmware. UEFI Secure Boot with vendor-controlled signing keys means that installing a custom operating system requires either obtaining the vendor’s cooperation or navigating a key enrollment process that manufacturers can make arbitrarily difficult. Apple’s iOS never permitted sideloading on standard devices until the EU’s Digital Markets Act compelled a narrow, tightly conditioned exception for European users. The right-to-repair movement has achieved real legislative progress in several US states and the EU, but the fight is ongoing and the default position of major manufacturers remains hostile to user modification.
The right to install unapproved software is being normalized away through app store gatekeeping. Outside the EU’s DMA carve-out, iOS users cannot install applications outside Apple’s App Store without jailbreaking their devices — a process Apple actively works to prevent with each software update. Windows S Mode restricts installations to the Microsoft Store. Chrome OS limits users to web applications and approved Android apps. macOS notarization means Apple can revoke any application’s ability to run at any time. The cumulative effect is that permission-based computing — where running software requires approval from a platform vendor — is becoming the default experience for the majority of computing device users. A generation is growing up for whom the concept of downloading and running arbitrary software is unfamiliar.
The systemic fragility created by centralized update authority was demonstrated catastrophically by CrowdStrike in July 2024. A single content update deployed through CrowdStrike’s auto-updating Falcon sensor — running with kernel-level privileges on millions of Windows machines worldwide — caused a global outage affecting airlines, hospitals, financial institutions, and emergency services. The update was deployed automatically, without user consent or review, to all endpoints simultaneously. The vulnerability here is not specific to the particular update that failed but is a structural property of the architecture: any system where a single vendor can push code with kernel-level privileges to millions of machines simultaneously, without user review, is a single point of catastrophic failure. The CrowdStrike incident demonstrated what the centralized update architecture makes possible, and there is no reason to expect it will be the last such event.
The open-source alternative has been systematically subverted rather than competed with. This is a more efficient capture strategy than direct replacement. Rather than building a proprietary competitor to Linux and convincing users to switch, incumbents have placed their employees in key maintainer positions, introduced corporate-aligned complexity into core infrastructure, and created dependency chains that pull even reluctant distributions toward centralized, corporate-controlled components. systemd’s near-universal adoption across Linux distributions was not driven by user demand but by corporate distribution decisions at Red Hat, Canonical, and SUSE, followed by cascading dependencies that made resistance increasingly costly. The result is an “open source” ecosystem where the most critical infrastructure components are effectively governed by corporate roadmaps.
Countermeasures
Incremental Improvements Within Linux
Linux has irreplaceable hardware support, a vast driver ecosystem, and a user base that no alternative operating system can match. For many users, wholesale replacement is impractical. The goal, then, is informed selection of components that minimize corporate dependency chains, maximize auditability, and preserve user control. This is not a permanent solution — a cleaned-up Linux on hardware with Intel ME and AMD PSP is still running on compromised hardware — but it is the most immediately accessible step for technically capable users.
Replacing systemd with auditable alternatives restores the Unix philosophy to service management. runit, descended from Daniel J. Bernstein’s daemontools, is the simplest supervision model available: a service is a directory with a run script, logs are plaintext files automatically rotated by svlogd, and the entire codebase is small enough to read in an afternoon (see the sketch after this paragraph). Void Linux uses runit as its default init and is its gold-standard deployment. s6 and s6-rc, designed by Laurent Bercot, provide more rigorous dependency handling for deployments requiring complex service dependency graphs. OpenRC is the most widely deployed alternative, used by Gentoo, Alpine, and Artix, with a large existing service library. The key is that each of these is individually comprehensible and auditable in a way that systemd is not, and none creates the cascading dependency chains that make systemd effectively irremovable.
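To make the “a service is a directory” claim concrete, here is a complete runit service definition, following Void Linux’s conventions (service directories under /etc/sv, enabled by symlinking into /var/service; paths vary by distribution, and the sshd example is illustrative):

```sh
# A runit service is a directory containing an executable "run" script.
mkdir -p /etc/sv/sshd/log

cat > /etc/sv/sshd/run <<'EOF'
#!/bin/sh
# exec replaces the shell so runsv supervises sshd directly;
# -D keeps sshd in the foreground, as supervision requires.
exec /usr/sbin/sshd -D
EOF

cat > /etc/sv/sshd/log/run <<'EOF'
#!/bin/sh
# svlogd writes automatically rotated plaintext logs.
exec svlogd -tt /var/log/sshd
EOF

chmod +x /etc/sv/sshd/run /etc/sv/sshd/log/run
ln -s /etc/sv/sshd /var/service/    # enable: runsvdir notices and starts it
```

That is the entire service definition: two short scripts, no unit-file language, no binary state, nothing that cannot be read and understood in a minute.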
Avoiding Rust kernel code preserves the kernel’s accessibility to independent contributors. Rust does not represent a significant advance over prior languages and tooling for the problems it claims to address in the kernel context: C with MISRA-inspired guidelines, clang static analysis, and AddressSanitizer address the same classes of memory-safety bugs through well-established techniques (illustrated in the sketch below), and where formal guarantees are genuinely needed, Ada/SPARK has decades of certified deployment in the industries with the most stringent safety requirements. The push for Rust’s adoption in the kernel was driven by a small, well-resourced group over significant objection from senior maintainers, and the Rust Foundation’s governance by Microsoft, Google, and Amazon places language evolution outside community control. Within the kernel, language choices are constrained: C by itself is adequate, exception-free C++ is a reasonable alternative where additional type safety is desired, and D’s C-compatible subset might also serve. For application and embedded programming, where Rust’s advocates often claim the broadest mandate, the field of mature alternatives is considerably wider: Go (a descendant of Alef and Limbo, with strong concurrency and memory safety), D, Lua, Ada/SPARK, safe C subsets, Oberon, C++ with style restrictions, C#, and OCaml, among others. None of these is a drop-in replacement for Rust, but Rust is itself a limited-domain language, and the practical pattern in systems work has always been to pair a low-level language with a safe one — C and Go, D and its safe subset, unsafe and safe C++ — rather than to claim that a single language solves all problems. Introducing a corporate-governed toolchain dependency into the kernel build system and raising the contribution barrier in ways that disproportionately affect independent developers are real costs, to be weighed against whatever incremental safety benefits Rust may offer. The case that those benefits justify those costs — given that C with modern tooling is adequate for the kernel and that a wealth of alternatives exists for higher-level work — has not been convincingly made.
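As a concrete illustration of the tooling-based approach, the toy below contrasts the classic unchecked copy with a bounds-checked, defensive version in the MISRA style. Compiled with clang -fsanitize=address, the unchecked variant (shown only in the comment) aborts with a stack-buffer-overflow report instead of silently corrupting memory; the checked variant refuses the oversized input by construction. This is a sketch of the technique, not kernel code:

```c
#include <stdio.h>
#include <string.h>

#define BUF_LEN 16

/* Defensive, MISRA-style copy: the destination size is an explicit
 * parameter and is checked before any write occurs. */
static int copy_bounded(char *dst, size_t dst_len, const char *src)
{
    size_t n = strlen(src);
    if (n + 1 > dst_len)
        return -1;              /* refuse rather than overflow */
    memcpy(dst, src, n + 1);
    return 0;
}

int main(void)
{
    char buf[BUF_LEN];
    const char *input = "this string is longer than sixteen bytes";

    /* The unchecked version of this call, strcpy(buf, input), is the
     * classic overflow; built with -fsanitize=address it aborts with a
     * stack-buffer-overflow report the moment it executes. */
    if (copy_bounded(buf, sizeof buf, input) != 0)
        fprintf(stderr, "input too long, rejected\n");

    return 0;
}
```

Running clang --analyze over the same file, and building test binaries with -fsanitize=address, is the unglamorous but well-proven workflow the paragraph describes.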
Retaining X11 and treating Wayland adoption as a user choice rather than a forced migration preserves decades of working infrastructure. Screen capture, remote desktop, accessibility tools, and tiling window manager ecosystems all function on X11. The security isolation that Wayland claims as its justification could have been implemented as X11 extensions without breaking backward compatibility — the clean break was a corporate scheduling decision, not a technical necessity. Where Wayland is adopted deliberately, Sway is the most auditable compositor: minimal C codebase, tiling model, no GNOME or KDE dependency. Distributions that maintain genuine X11 support — Void Linux, Gentoo, Devuan, Alpine — should be preferred over those treating X11 as deprecated.
Replacing GNOME and KDE with minimal, auditable desktops eliminates the primary vectors for systemd’s desktop dependency chains. The suckless project’s dwm is an entire window manager in under 2,000 lines of C, modified via source patches rather than configuration files — the closest existing Linux-ecosystem equivalent to Niklaus Wirth’s design philosophy. i3 provides a well-documented tiling window manager with no systemd dependency and a large user community. XFCE is the lightest traditional desktop with less systemd entanglement. The D-Bus dependency that connects systemd, GNOME, KDE, and NetworkManager should be minimized wherever possible; in the long term, 9P-based IPC is the architecturally correct replacement.
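For readers who have not seen the suckless model, the fragment below is a hedged approximation of the key-binding table in dwm’s config.h (the names MODKEY, Key, spawn, and focusstack follow dwm’s conventions, but this is an illustrative excerpt rather than the verbatim upstream file). Configuration is C source: change the table, recompile, restart.

```c
/* Excerpt in the style of dwm's config.h: behavior is changed by
 * editing this compiled-in table, not by parsing a config file. */
#define MODKEY Mod4Mask                 /* the "Super" key */

static const char *termcmd[] = { "st", NULL };
static const char *menucmd[] = { "dmenu_run", NULL };

static const Key keys[] = {
    /* modifier           key         function     argument */
    { MODKEY,             XK_Return,  spawn,       {.v = termcmd } },
    { MODKEY,             XK_p,       spawn,       {.v = menucmd } },
    { MODKEY,             XK_j,       focusstack,  {.i = +1 } },
    { MODKEY,             XK_k,       focusstack,  {.i = -1 } },
    { MODKEY|ShiftMask,   XK_q,       quit,        {0} },
};
```

The consequence is that the configuration is auditable by exactly the same means as the program itself, and there is no runtime parser to exploit.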
Careful container and package selection avoids the distribution-capture dynamics of centralized package systems. Snap should be avoided entirely — its closed-source store backend makes it the canonical example of corporate distribution capture layered on open-source software. Flatpak with user-controlled remotes is acceptable; its bubblewrap sandboxing layer is auditable, and the remotes themselves can be added and removed at will, as the sketch below shows. AppImage, which is self-contained with no daemon and no central store, is most consistent with user autonomy. Nix and GNU Guix provide the most technically sound package model: hash-addressed derivations, reproducible builds, and no global mutable state. The BSDs — FreeBSD and OpenBSD in particular — provide a clean alternative path where hardware support permits: neither has adopted systemd or Wayland, neither kernel contains Rust, and the family has produced substantial security infrastructure including OpenSSH and LibreSSL.
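As a brief usage sketch of the user-controlled-remotes point (the remote name and URL below are Flathub’s standard ones; org.gnu.emacs is one example application ID):

```sh
# Flatpak remotes are user-controlled: add, inspect, or delete at will.
flatpak remote-add --user --if-not-exists flathub \
    https://dl.flathub.org/repo/flathub.flatpakrepo

flatpak remotes --user                          # list configured remotes
flatpak install --user flathub org.gnu.emacs    # install from a chosen remote
flatpak remote-delete --user flathub            # remove the remote entirely
```

No equivalent operation exists for Snap’s store backend, and that asymmetry is the difference that matters.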
New General Purpose Computing Platforms
Incremental Linux improvements address symptoms rather than causes. The architectural problems — global namespaces, ambient authority, monolithic kernels with millions of lines of trusted code — require architectural solutions. Several existing operating systems and design traditions offer fundamentally sounder models, and the most realistic deployment strategy is not wholesale replacement of mainstream computing but the enclave model: a trusted OS running on minimal hardware, coexisting with hostile mainstream infrastructure and treating commercial platforms as untrusted carriers.
Plan 9 and its successor 9front represent the most architecturally coherent alternative to the Linux/container ecosystem. Plan 9, developed at Bell Labs as the successor to Unix, takes the “everything is a file” principle more seriously than Unix ever did: network connections, processes, graphics, and authentication are all file trees mounted in per-process namespaces. This single design decision — that each process constructs its own view of the filesystem — makes containerization a first-class kernel primitive rather than a bolted-on afterthought. Linux containers require cgroups, namespaces, seccomp, overlay filesystems, a container runtime, and an orchestrator, totaling millions of lines of complex, security-critical code. A Plan 9 “container” is simply a process with a different namespace. The 9P protocol serves as a universal IPC mechanism, replacing the proliferation of Linux IPC mechanisms — pipes, sockets, D-Bus, netlink, io_uring — with a single clean, capability-based protocol. The factotum authentication agent replaces setuid and PAM with explicit capability grants, eliminating ambient authority. 9front is the actively maintained fork with solid x86_64 support. For migration, Plan 9 from User Space (plan9port) ports Plan 9 userland to run on Linux, and FUSE-based 9P environments provide an immediate practical approximation on stock Linux.
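A minimal sketch of the “container is just a namespace” point, in Plan 9 C (this compiles with the native Plan 9/9front toolchain, not POSIX; the jail directory path is a placeholder and error handling is abbreviated):

```c
#include <u.h>
#include <libc.h>

/* A minimal Plan 9 "container": a process with a private namespace.
 * Assumes /usr/glenda/jail has been populated with the binaries the
 * jailed shell should see. Illustrative sketch, not hardened code. */
void
main(void)
{
	/* RFNAMEG gives us a private copy of the namespace, RFENVG a
	 * private environment; changes below affect only this process
	 * and its children. */
	if(rfork(RFNAMEG|RFENVG) < 0)
		sysfatal("rfork: %r");

	/* Rearrange our own view of the file tree. There is no global
	 * mount table to corrupt and no special privilege required. */
	if(bind("/usr/glenda/jail/bin", "/bin", MREPL) < 0)
		sysfatal("bind /bin: %r");
	if(bind("/usr/glenda/jail/tmp", "/tmp", MREPL) < 0)
		sysfatal("bind /tmp: %r");

	/* The shell and everything it runs now sees only the jail. */
	execl("/bin/rc", "rc", nil);
	sysfatal("exec: %r");
}
```

That is the entire containment mechanism: no cgroups, no seccomp profiles, no runtime daemon, just a namespace.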
The BSD family demonstrates that Unix-derived systems can maintain architectural integrity under corporate pressure. FreeBSD’s Capsicum capability framework provides the closest thing to Plan 9’s capability model in a mainstream operating system. OpenBSD’s pledge and unveil system calls offer practical capability restrictions that any program can adopt incrementally (see the sketch after this paragraph), and OpenBSD’s security-first development culture has produced OpenSSH and LibreSSL — critical infrastructure used across the entire internet, developed by a small team with an explicit commitment to code auditing and minimal complexity. Neither FreeBSD nor OpenBSD has adopted systemd or Wayland, neither kernel contains Rust, and there is no institutional pressure to change any of that.
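A short sketch of the incremental-adoption point, using the actual OpenBSD system calls (the program is a toy that prints one file; after the unveil and pledge calls, the kernel terminates the process if it strays outside the declared view):

```c
#include <stdio.h>
#include <unistd.h>
#include <err.h>

/* Toy demonstrating OpenBSD's unveil(2) and pledge(2): print one file,
 * with the kernel enforcing that the process can do nothing else. */
int
main(int argc, char *argv[])
{
	char buf[4096];
	size_t n;
	FILE *f;

	if (argc != 2)
		errx(1, "usage: %s file", argv[0]);

	/* Restrict the visible filesystem to this one path, read-only,
	 * then lock out any further unveil calls. */
	if (unveil(argv[1], "r") == -1)
		err(1, "unveil");
	if (unveil(NULL, NULL) == -1)
		err(1, "unveil lock");

	/* Promise only stdio plus read access to unveiled paths; any
	 * other system call from here on kills the process. */
	if (pledge("stdio rpath", NULL) == -1)
		err(1, "pledge");

	if ((f = fopen(argv[1], "r")) == NULL)
		err(1, "fopen");
	while ((n = fread(buf, 1, sizeof buf, f)) > 0)
		fwrite(buf, 1, n, stdout);
	fclose(f);
	return 0;
}
```

Two calls, adoptable one program at a time, with no framework and no policy language: capability restriction designed for programmers rather than platforms.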
Oberon-based systems embody Niklaus Wirth’s vision of individually comprehensible computing. The Oberon System is a complete operating system — compiler, editor, document system, networking — written in Oberon, a language whose entire specification fits on a few pages. The compiler is a few thousand lines of clean, readable code. Dynamic arrays with automatic bounds checking eliminate buffer overflows by construction, without a borrow checker or complex ownership system. A2/Bluebottle, written in Active Oberon with built-in concurrency primitives, demonstrates that a modern, multicore, networked operating system with a graphical interface does not require millions of lines of code. Oberon’s adoption has been limited by its Pascal-derived syntax; a C-syntax isomorphic front-end — a purely mechanical, reversible transformation preserving full semantics — would remove the adoption barrier without any semantic compromise. A concatenative (Forth-like) front-end would provide a third isomorphic syntax optimized for compact distribution and QR code encoding. The three-syntax system would compile to identical abstract syntax trees, sharing a single toolchain and runtime.
The enclave model is the realistic deployment strategy for all of these. The goal is not to replace Windows or macOS for everyone but to run a trusted operating system on minimal, auditable hardware for specific sensitive functions — key management, secure communications, document handling — while treating mainstream commercial infrastructure as hostile. The approach is analogous to how intelligence agencies treat their own computing: hostile network assumptions, encryption at every boundary, and trusted enclaves for operations that matter. A Plan 9 instance on a small x86 box, or an Oberon system on a Raspberry Pi, serving as a secure terminal alongside a mainstream laptop used for untrusted functions, is a practical architecture available today.
AI inference is a particular challenge because competitive performance currently requires NVIDIA hardware and the proprietary CUDA software stack. The pragmatic approach, consistent with the enclave model, is to treat AI inference as an untrusted external service: sensitive data never reaches the inference node in plaintext, outputs are validated as untrusted, and the inference hardware is architecturally equivalent to any other hostile network service. llama.cpp on AMD hardware with the open-source ROCm stack, running on a cleaned-up Linux or FreeBSD installation, is the most autonomous current option. Tenstorrent, with its RISC-V-based compute tiles and open-source software stack, represents the most promising path toward genuinely open AI accelerator hardware.
Embedded Systems and Enclaves
Embedded systems represent the most immediately practical countermeasure tier because they exist in a regulatory and architectural gap that the centralized control apparatus has not yet closed.
The regulatory gap is substantial. ESP32, ESP8266, STM32, and similar microcontrollers are programmed directly over USB or JTAG, bypassing operating systems, app stores, and attestation layers entirely. The ecosystem is vast: hundreds of millions of deployed units running custom firmware, mostly using open toolchains — Arduino, MicroPython, ESP-IDF — that are not subject to any certification regime. No TPM, no Secure Boot, no signing requirement exists on these devices, because the use cases that drove their development — sensors, actuators, industrial control — require direct hardware access that certification would impede. The EU Cyber Resilience Act attempts to reach these devices through its broad “products with digital elements” language, but enforcement against firmware flashed by end users is practically impossible. The hobbyist and maker culture serves as a preservation layer: the skills and toolchains for direct hardware programming remain widely distributed and are not dependent on any platform incumbent.
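The directness is easiest to see in code. The following is a complete ESP-IDF firmware in C (ESP-IDF is Espressif’s open SDK; GPIO 2 drives the onboard LED on many dev boards and is an assumption here). It is built and flashed over a USB serial cable with the open toolchain, with no store, signature, or attestation anywhere in the path:

```c
#include "freertos/FreeRTOS.h"
#include "freertos/task.h"
#include "driver/gpio.h"

/* Complete ESP32 firmware: blink an LED forever. Flashed directly
 * over USB with idf.py; no gatekeeper in the loop. */
#define BLINK_GPIO GPIO_NUM_2   /* board-specific; adjust as needed */

void app_main(void)
{
    gpio_reset_pin(BLINK_GPIO);
    gpio_set_direction(BLINK_GPIO, GPIO_MODE_OUTPUT);

    for (;;) {
        gpio_set_level(BLINK_GPIO, 1);
        vTaskDelay(pdMS_TO_TICKS(500));   /* on for 500 ms */
        gpio_set_level(BLINK_GPIO, 0);
        vTaskDelay(pdMS_TO_TICKS(500));   /* off for 500 ms */
    }
}
```

idf.py build followed by idf.py -p /dev/ttyUSB0 flash puts it on the chip, and the entire path from source to running silicon is visible to the person holding the board.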
Dedicated single-function enclave devices implement the strategic insight that reframes the entire conflict: stop trying to secure the insecure. Commercial hardware and software — Windows, macOS, Android, iOS, cloud services — are hostile infrastructure. Treat them exactly as an intelligence agency treats a compromised network: route around them and encrypt everything passing through them. A secure terminal with a minimal keyboard and display handles only plaintext input and output, encrypting before passing data to an untrusted host — the host sees only ciphertext. A secure word processor handles text editing and document storage, with everything encrypted at rest; the untrusted system sees only encrypted blobs it cannot read. A key management enclave stores and uses cryptographic keys without ever exposing private key material to any untrusted system. A secure communications enclave handles encrypted protocols, presenting only encrypted packets to untrusted network infrastructure. Each enclave device is physically minimal, functionally specific, air-gapped or minimally connected, bootstrapped from auditable source, and cheap enough to be disposable if compromised. The architecture is not novel — hardware security modules in banking, Ledger and Trezor cryptocurrency wallets, and military secure telephones have all independently converged on essentially this model, which suggests it reflects something fundamental about what effective security requires.
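The data path of the secure-terminal example can be sketched in a few lines of C with libsodium (a real, widely audited library; the per-run key generation below stands in for the tamper-resistant key storage a real enclave would use, and is the loudly labeled simplification here):

```c
#include <sodium.h>
#include <stdio.h>
#include <string.h>

/* Enclave-side sketch: plaintext exists only inside this program;
 * the untrusted host receives nonce + ciphertext and nothing else.
 * Uses libsodium's crypto_secretbox (XSalsa20-Poly1305). */
int main(void)
{
    unsigned char key[crypto_secretbox_KEYBYTES];
    unsigned char nonce[crypto_secretbox_NONCEBYTES];
    unsigned char boxed[crypto_secretbox_MACBYTES + 256];

    if (sodium_init() < 0)
        return 1;

    /* Simplification: a real device loads the key from its own
     * tamper-resistant storage rather than generating it per run. */
    randombytes_buf(key, sizeof key);

    /* Plaintext from the enclave's own keyboard/display side. */
    const char *msg = "meet at the usual place";
    size_t len = strlen(msg);

    randombytes_buf(nonce, sizeof nonce);   /* fresh nonce per message */
    crypto_secretbox_easy(boxed, (const unsigned char *)msg, len,
                          nonce, key);

    /* Only this crosses the boundary to the untrusted host, e.g. over
     * a serial link; the host can store, forward, or leak it freely. */
    fwrite(nonce, 1, sizeof nonce, stdout);
    fwrite(boxed, 1, crypto_secretbox_MACBYTES + len, stdout);
    return 0;
}
```

Everything the carrier sees is a nonce plus authenticated ciphertext; the security of the system no longer depends on anything the carrier does.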
The untrusted carrier model completes the strategic reframing. Commercial hardware and software become dumb encrypted pipes — functionally equivalent to hostile network nodes. This is liberating: you stop caring about Windows updates, Apple’s app store policies, Google’s surveillance, and government backdoors in commercial operating systems. They can have all the access they want to the carrier layer — they see only ciphertext. The entire regulatory and corporate control apparatus identified throughout this essay — app store certification, attestation infrastructure, mandatory update obligations, the Brussels Effect — has no purchase on encrypted data passing through infrastructure the adversary controls. The adversary’s control of the carrier layer provides no useful capability against properly implemented enclave cryptography. You are not fighting the control architecture; you are rendering it irrelevant.
Defenses Against Hardware Supply Chain Restrictions and Subversions
Software-only solutions are ultimately defeated by hardware-level attestation and management engines. If the silicon itself answers to someone other than the user, no amount of operating system hardening is sufficient. Genuine computing autonomy requires hardware control, and the notable development of the past decade is that this has become increasingly accessible to individuals and small teams.
FPGAs as a trusted computing base offer the most immediately practical path to auditable hardware. A Field Programmable Gate Array implements logic below the software layer entirely — there is no operating system, no firmware in the conventional sense, just a bitstream configuring hardware gates. A sufficiently capable FPGA can implement any digital circuit, including complete processors, making software-layer attestation architectures irrelevant. Critically, fully open toolchains now exist: Yosys for synthesis, nextpnr for place-and-route, and Project IceStorm and Project Trellis for bitstream generation targeting Lattice iCE40 and ECP5 devices. The entire path from hardware description language source to running silicon is auditable without any proprietary tools. Lattice iCE40 devices capable of running a soft RISC-V core cost under five dollars in quantity, and development boards are available for twenty-five to fifty dollars.
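For concreteness, the entire open flow from Verilog source to configured silicon is four commands (top module name, pin-constraint file, and board are placeholders; an iCEBreaker-style iCE40UP5K board is assumed):

```sh
# Fully open bitstream flow for a Lattice iCE40UP5K:
yosys -p 'synth_ice40 -top top -json top.json' top.v    # synthesis
nextpnr-ice40 --up5k --package sg48 \
    --json top.json --pcf board.pcf --asc top.asc       # place and route
icepack top.asc top.bin                                 # bitstream (IceStorm)
iceprog top.bin                                         # program over USB
```

Every stage is open source and every intermediate file is inspectable, which is precisely what cannot be said of any commercial CPU’s boot path.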
RISC-V, the open instruction set architecture, eliminates the licensing constraints and proprietary implementations that characterize x86 and ARM. RISC-V implementations are fully auditable from source. PicoRV32 provides a minimal, well-audited core that fits on the smallest modern FPGAs. VexRiscv offers a more capable, configurable pipeline for applications requiring more performance. SERV, the smallest known RISC-V implementation at roughly 200 lookup tables, demonstrates that a complete processor can be simple enough to verify by inspection. None of these implementations contain a management engine, a platform security processor, or attestation hardware. They execute exactly the instructions they are given and nothing else.
Open-source Forth processors on FPGAs deserve particular mention. The J1, designed by James Bowman, is a minimal 16-bit stack CPU implemented in roughly 200 lines of Verilog, achieving around 100 MIPS on modest FPGA hardware. SwapForth, also by Bowman, builds a complete interactive Forth system on the J1, providing an entire development environment running on the FPGA itself. The J1 has been ported to iCE40 devices and works with the fully open Yosys/IceStorm toolchain, meaning a complete Forth computer — processor, language, and development environment — can be built and audited end to end without any proprietary tools. For the enclave applications discussed elsewhere in this essay, a J1 running SwapForth on an iCE40 is a self-contained, auditable computing system available today for well under fifty dollars in hardware.
Small-scale IC fabrication outside the commercial semiconductor supply chain is an emerging possibility worth noting, though not yet a proven capability for general use. UV galvo laser lithography using commodity laser diodes and galvanometer mirrors appears capable of achieving feature sizes in the range of 1.5 microns with careful optics — roughly equivalent to early 1980s commercial semiconductor processes. If this proves out at scale, it would place simple processors within reach of small teams: a RISC-V RV32I implementation requires roughly 10,000 to 20,000 gates, and a Forth processor following Chuck Moore’s F18A architecture requires fewer still. The fabrication toolchain is already open source — Magic VLSI for layout, KLayout for GDS editing, OpenROAD for the RTL-to-GDS flow, Yosys for synthesis — and the process knowledge for older fabrication techniques like diffusion doping is well documented in the public domain. How soon this becomes practical for producing functional processors remains an open question, but the trajectory is encouraging. Commercial efforts are already lowering the barriers to chip fabrication from both ends. TinyTapeout, created by Matt Venn, allows individuals to fabricate small digital circuits for a few hundred dollars by sharing die space on a single chip, using the Efabless platform and SkyWater’s open 130nm process. The Efabless Open MPW shuttle program, sponsored by Google, has provided free fabrication runs for open-source chip designs. Open process design kits from SkyWater (130nm) and GlobalFoundries (180nm) make real foundry processes available for design without licensing fees. And Carnegie Mellon’s Hacker Fab project is working to build open-source, low-cost photolithography and fabrication setups intended to make chip fabrication accessible at the university-lab level rather than requiring a full commercial cleanroom. Taken together, these developments suggest that small-scale IC manufacturing — whether through shared commercial fabrication or increasingly accessible local equipment — may become a practical option in the near term, which would close the last remaining gap in the hardware trust chain.
Source code as legally protected speech provides the ultimate distribution resilience for the entire stack. Bernstein v. United States and Junger v. Daley established that source code is protected under the First Amendment. A program small enough to be printed on a page or encoded in a QR code is maximally protected speech — prior restraint on a printed page is essentially impossible under US law. The cypherpunk movement explored this directly when PGP source code was printed in books specifically to invoke First Amendment protection. Compact languages are political infrastructure in this context: a complete Forth implementation fits in kilobytes, sometimes hundreds of bytes. The entire Forth source plus application can be human-readable, auditable, and printable in a form a competent programmer can verify by inspection. The Trusting Trust problem — Ken Thompson’s 1984 demonstration that a compiler can be compromised to insert backdoors surviving recompilation — is addressed by bootstrappable builds projects that provide a minimal binary seed, sometimes just a few hundred bytes of hand-auditable machine code, from which an entire toolchain can be bootstrapped. A Forth system is maximally resistant to this attack: the minimal interpreter is small enough to write and verify by hand, and the entire bootstrap chain is individually inspectable. The complete trust chain from physical first principles to running application is printable in a modest volume and distributable through channels that no regulatory framework can reach.
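To make the compactness claim tangible: the complete, runnable C program below implements the core of a Forth-like system — a data stack, an outer interpreter, and a handful of primitives — in well under a page. It is a toy, not a real Forth (no dictionary, no colon definitions), but it shows why the kernel of such a system is small enough to verify by inspection:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Toy Forth-like interpreter: reads whitespace-separated words from
 * stdin; numbers are pushed, known words executed. Auditable line by
 * line, which is the property the surrounding argument depends on. */
static long stack[256];
static int sp = 0;                        /* next free slot */

static void push(long v) { if (sp < 256) stack[sp++] = v; }
static long pop(void)    { return sp > 0 ? stack[--sp] : 0; }

int main(void)
{
    char word[64];

    while (scanf("%63s", word) == 1) {
        char *end;
        long v = strtol(word, &end, 10);

        if (end != word && *end == '\0') push(v);          /* number */
        else if (!strcmp(word, "+"))    push(pop() + pop());
        else if (!strcmp(word, "*"))    push(pop() * pop());
        else if (!strcmp(word, "-"))    { long b = pop(); push(pop() - b); }
        else if (!strcmp(word, "dup"))  { long a = pop(); push(a); push(a); }
        else if (!strcmp(word, "swap")) { long b = pop(), a = pop();
                                          push(b); push(a); }
        else if (!strcmp(word, "."))    printf("%ld ", pop());
        else if (!strcmp(word, "bye"))  break;
        else fprintf(stderr, "? %s\n", word);              /* unknown */
    }
    putchar('\n');
    return 0;
}
```

Fed the line 2 3 + dup * . bye it prints 25. A real Forth adds a dictionary and the ability to define new words, but the whole remains within the reach of one careful reader — and that, not performance, is what makes it political infrastructure.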
Conclusions
The preceding sections have documented a control architecture that operates at every layer — from regulatory frameworks through silicon to application distribution — and a set of countermeasures that, while demanding, are technically feasible today. What remains is to draw out the political implications, because the technical picture, however detailed, understates what is at stake.
The asymmetric burden described in the introduction is worth revisiting in light of the specific mechanisms documented in the body of this essay. App store certification, attestation mandates, mandatory update obligations, and compliance frameworks form a system whose costs fall almost entirely on legitimate actors — ordinary users, small businesses, independent developers, and political dissidents — while the sophisticated adversaries these measures are ostensibly designed to stop have the resources and incentive to route around them. The countermeasures sections of this essay demonstrate, in concrete technical detail, that the control architecture can in fact be circumvented by anyone sufficiently motivated. The people who lack that motivation or capability are not criminals; they are the general public. A regime that constrains the general public while leaving its stated targets largely unaffected is not serving a security function. It is serving the interests of the incumbents and authorities who benefit from reduced competition and reduced unsupervised communication.
This dynamic is considerably more pronounced in Europe, where the political culture defaults more readily to institutional control over individual autonomy, and where the regulatory process has been more thoroughly captured by incumbent corporate interests than in the United States. The EU regulatory apparatus — the CRA, GDPR, DSA, DMA, and their successors — whatever its intentions, operates in practice as a system in which large incumbents help write compliance frameworks that smaller competitors cannot meet. A full account of the structural differences between American and European governance is beyond the scope of this essay, but the consequences for computing freedom are direct: the EU’s regulatory architecture reflects significantly different assumptions about the balance between individual liberty and state authority than the American tradition, and the attempt to globalize that architecture through the Brussels Effect is a source of genuine and growing friction between the two systems.
These differences are a primary driver of the US-EU technology conflict, and a primary reason the United States should consider decoupling. If European governments and their citizens choose to accept these tradeoffs for themselves — exchanging computing freedom for regulatory control, innovation for compliance, and individual autonomy for institutional oversight — that is their sovereign right. It is not acceptable for those regulatory choices to be imposed on Americans through the Brussels Effect, extraterritorial enforcement, or the economic leverage of single market access. The US should resist the de facto adoption of EU technology regulation, decline to treat EU frameworks as having extraterritorial force over American companies and users, and be prepared to impose reciprocal trade consequences if Europe persists in attempting to export its regulatory architecture. The Cyber Resilience Act, GDPR, and their successors are European laws for European jurisdictions; their globalization through market leverage, without any corresponding American legislative process, is a problem that demands a political response.
Domestically, regulatory capture by large technology incumbents requires continuous resistance. Antitrust enforcement must address not only market dominance but the specific mechanism by which incumbents co-author compliance frameworks that foreclose competition. Legislative vigilance is needed against certification and update mandates that, whatever their stated purpose, function as barriers to entry. And the right to compute — to run arbitrary software on hardware you own, to modify that software, to understand what it does, and to share your modifications with others — should be recognized as a fundamental civil liberty, in the same political tradition that produced the First and Fourth Amendments. The right to compute is the right to think with tools; restricting it is restricting thought itself.
The strategic technical reframing is to stop fighting the control architecture frontally and build around it. The cypherpunk insight has always been that encryption and autonomy do not require permission; they require engineering. Every component of the autonomous stack described in this essay — from runit replacing systemd, to Plan 9’s per-process namespaces replacing containers, to FPGA implementations of RISC-V processors replacing Intel’s management-engine-compromised silicon, to Forth-based enclaves communicating through encrypted channels over untrusted carriers — is buildable today with accessible tools and modest resources, with small-scale VLSI, the one piece still maturing, on track to close the last hardware trust gap. This is not a theoretical future but an engineering present.
The emerging possibility of small-scale IC fabrication, the embedded enclave ecosystem, and the open FPGA toolchain together suggest the potential for a second hobbyist hardware era — echoing the Altair, the Apple I, and the Homebrew Computer Club, but with the difference that today’s participants understand the political stakes explicitly. They are building not merely interesting technology but infrastructure for liberty.
The defense against attacks on general purpose computing is not resistance alone but construction: building systems that render the attack surface irrelevant, while fighting politically to preserve the legal and economic space for those systems to exist. The window is open, but it is closing.