Right to repair and John Deere | Composable and distributed systems group
Mon, 2026-02-09
Sharing our experimental call summaries.
AI-generated digests of Yak Collective study groups.
Key resources discussed
Article links:
1) https://howardyu.substack.com/p/did-john-deere-build-the-future-or?
2) https://publicinterestnetwork.org/wp-content/uploads/2025/01/State-of-Right-to-Repair_USPEF_Jan_.2025.pdf
Topic & Prompt: Right to Repair, Interoperability, and “Walk‑Away” Tech
The session centered on “right to repair” prompted by readings on John Deere and the state of right-to-repair legislation. The group used the prompt as a springboard to examine different “regimes” of hardware/software lock‑in, where the real tradeoffs are (physics, economics, liability), and how this connects to broader issues like data portability, game shutdowns, and military/aerospace systems.
Two Distinct “Right to Repair” Regimes
1. Large, standardized, mix-and-match devices (tractors, cars, printers, Keurig, etc.)
These systems are physically big enough and modular enough that components can, in principle, be mixed and matched.
Interfaces are often standardized; third‑party parts are technically feasible.
Classic example: printer ink cartridges.
There’s no reason that you can’t have third party ink cartridges, there’s no reason you can’t refill ink cartridges. There’s just a whole bunch of software controls on it to make that difficult because it is lucrative.
Other examples invoked: John Deere tractors, F‑35, many modern cars, consumer printers, Keurig coffee pods.
Key concern in this regime:
Digital controls are used to prevent otherwise physically feasible repairs or third‑party parts. This is seen as rent‑seeking:
you literally cannot do this thing until somebody on the other end pushes a button when it would work except for that they haven’t pushed the button.
2. Highly integrated, compact devices (phones, laptops, smartwatches)
Here the design constraints are driven by thinness, weight, and cost.
This leads to bespoke components, glue instead of fasteners, 3D‑printed or machined chassis (e.g., Apple’s titanium work), and tightly integrated assemblies that are hard to disassemble, repair, or recycle.
Examples: iPads and iPhones, Fairphone, Framework laptops, smartwatches.
One participant described an earlier conversation on iPads: substantial performance and affordability gains depend on highly integrated construction;
you couldn’t have all random middle‑class humans having these powerful pad‑sized computers if everything had to remain easily repairable and modular.
Important distinction:
In this regime, non‑repairability is often a byproduct of engineering and cost constraints rather than pure rent‑seeking. There was skepticism that a full “right to repair” win here is even desirable to most consumers, given tradeoffs in size, performance, and price.
The group agreed it’s useful to keep these two regimes mentally separate when reading or advocating around right to repair.
Safety, Liability, and Regulatory Capture
A major thread was safety and liability concerns that are often invoked to justify restricting repair.
Legitimate concerns raised:
Modifying cars can:
Bypass speed caps (e.g., proposals in California to enforce speed limits electronically).
Interfere with airbag or seatbelt sensors.
Break emissions and mileage tuning, leaving vehicles non‑compliant with regulations.
Question: if a user or third‑party modifies a system and it becomes non‑compliant or unsafe, who is liable?
The equipment manufacturer?
The car manufacturer?
Or the user/third‑party who did the modification?
One strong view: if you go outside official channels and modify the system, you “own” that modification and associated risk; manufacturers should have some shielding from liability in those cases. At the same time, the group acknowledged that how liability is structured can itself become a barrier to repair if companies overstate risk to keep control.
Counterpoint on “warranty” defenses:
Another participant pushed back that the
we have to lock this down to protect ourselves from warranty/legal issues
line is weak, because these same companies already use broad disclaimers to limit warranty and liability. Using remote locks and DRM primarily to avoid hypothetical liability doesn't "hold water," and courts should treat such claims skeptically.
China EV door‑handle example:
China recently banned EV designs in which recessed, software‑controlled door handles cannot be opened mechanically if the electronics fail.
Motivation: crash scenarios where rescuers could not open doors because the software/actuators failed.
This was not originally about right to repair, but demonstrates a hard safety boundary: design choices that trap users (literally, in cars) can trigger regulation that overrides aesthetics and “smart” behavior.
Since China dominates EV manufacturing, such safety-driven constraints can ripple globally.
This example illustrated that there are natural (and sometimes sudden) limits to “brickability” when public safety is directly and visibly impacted.
Beyond Tractors: Games, Remote Bricking, and “Stop Killing Games”
The discussion broadened to digital goods that depend on remote servers.
Stop Killing Games movement:
Cited as an important parallel by a participant who follows Ross Scott’s work.
Historical pattern:
Older games shipped as binaries; even if multiplayer required a server, players could often run their own servers.
Modern games often require centralized publisher servers for single‑player functionality, DRM, or core logic.
When servers are shut down:
It’s not just that multiplayer dies – the entire game can cease to function.
Consumers are left with a “game you paid for” that stops working, with no fallback.
Key distinction from John Deere:
John Deere is often portrayed as actively malicious, shaping architecture to enforce lock‑in and extract more revenue.
Many game shutdowns appear driven more by laziness or lack of planning than malice:
Teams design without an end‑of‑life plan.
Once the revenue curve tails off, they shut down servers without providing an offline mode or self‑hosted option.
Ross Scott’s research suggests: designing with an end‑of‑life plan is not particularly expensive if considered up front.
The Stop Killing Games push is framed less as “change the law to make this illegal” from scratch and more as
get courts to recognize this as already illegal under consumer law
(i.e., you can’t unilaterally destroy functionality of a product sold as a durable good).
Remote bricking as a red line:
Multiple people were uncomfortable with any regime where a vendor can unilaterally brick your device from afar—whether a tractor, car, game, or other consumer electronics. A strong normative stance was:
any time somebody can brick your stuff from afar, that just needs to be illegal.
Personal Histories and the “Destructive Curiosity” Factor
Several participants connected the topic to their own backgrounds:
One recalled bricking toys as a child by opening them up and failing to reassemble them, contrasted with a cousin whose toys remained in working condition. This established a personal sense that:
Curiosity and experimentation are valuable.
But “right to open” does not magically confer the skill to repair or reassemble correctly.
Another story: an early refurbished iPhone (circa 2008) in India that had to be jailbroken to work.
Jailbreaking enabled functionality and app access but led to overheating and instability.
This literal “burning” experience colored their sense of the practical limits of tinkering.
These anecdotes reinforced a theme: access and capability are different. Even if legal and technical barriers are removed, complex systems will remain the domain of a specialist minority, and failed amateur interventions can create new failure modes and support burdens.
Professional Experience: Enterprise Simulation and Knob Exposure
One participant compared right to repair to API and configuration exposure in complex CAE/simulation software:
In simulation tools, there are many “numerical knobs” and solver controls that can be:
Hidden behind sane defaults.
Or exposed to power users.
Experience: every time more of these internal knobs were exposed to customers, many misused them, leading to:
Bad results.
Increased support load.
Time spent “undoing” customer misconfigurations.
Analogy to right to repair:
Complex hardware systems with full internal access are similar: if you give everyone full low‑level control, many will misconfigure things and then require expensive remediation.
A well‑designed, opinionated, vertically integrated system can be a better market outcome, especially when you price in:
Support costs.
Time engineers spend debugging user misconfigurations.
This echoes the earlier view that
tightly integrated + guarded interfaces
can be more efficient for high‑complexity systems, even if it constrains repair freedoms.
The same speaker noted that in their domain, data interoperability—common file formats, neutral outputs that other tools can consume—is a workable “middle ground” when full internal tinkering isn’t viable. That line of thought reappears in the John Deere discussion and broader interoperability themes.
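The knob-exposure tradeoff described above can be sketched as a configuration layer that ships with sane defaults and only surfaces low-level solver controls behind an explicit expert flag. This is a minimal illustrative sketch; the class, knob names, and values are hypothetical and not taken from any real simulation product.

```python
from dataclasses import dataclass, field

@dataclass
class SolverConfig:
    """Hypothetical CAE solver configuration with guarded low-level knobs."""
    # High-level settings any user may touch.
    tolerance: float = 1e-6
    max_iterations: int = 500
    # Internal numerical knobs, hidden behind sane defaults.
    _expert: dict = field(default_factory=lambda: {
        "relaxation_factor": 0.8,
        "preconditioner": "ilu",
    })
    expert_mode: bool = False  # must be opted into explicitly

    def set_knob(self, name: str, value) -> None:
        """Expose internal knobs only to users who opt into expert mode."""
        if not self.expert_mode:
            raise PermissionError(
                f"'{name}' is an internal knob; enable expert_mode to change it"
            )
        if name not in self._expert:
            raise KeyError(f"unknown knob: {name}")
        self._expert[name] = value

cfg = SolverConfig()
try:
    cfg.set_knob("relaxation_factor", 1.5)  # rejected: defaults protect the user
except PermissionError as err:
    print(err)

cfg.expert_mode = True                       # deliberate opt-in
cfg.set_knob("relaxation_factor", 1.5)       # now allowed, at the user's own risk
```

The design choice mirrors the point in the discussion: the knobs still exist, but the default posture absorbs most of the support burden of misconfiguration.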
Interoperability, Data Portability, and Heterogeneous Big Tech
The conversation returned multiple times to interoperability and data portability as adjacent but distinct issues from right to repair.
Key distinctions:
Right to repair:
Primarily about atoms, with bits used to control or restrict what can be done with physical hardware.
Interoperability & data portability:
About the flow of bits between systems, services, and ecosystems.
One participant argued the John Deere article conflated these topics in a way that obscures their differences, even though they’re correlated:
things that help with one tend to make the others easier, but they aren’t the same issue.
Big tech comparison:
The group contrasted major tech firms on interoperability/portability:
Apple and Microsoft:
Characterized as having
really bad interoperability, data portability.
Apple singled out as “the worst” in subtly locking things down.
Google:
Described as “oddly this kind of weird hero.”
Google Takeout provides significantly more data export capability than most competitors.
Broad API exposure across products.
Amazon:
S3‑compatible APIs have become quasi‑standards; many clones exist.
Data can be migrated in and out, although egress costs remain non‑trivial.
Observation: public narratives that lump Google, Apple, Microsoft, and Amazon together as a single “big tech lock‑in” block ignore meaningful heterogeneity in how much actual interoperability and portability they support.
One participant aligned with Tim Wu’s stance: it’s better to legislate for interoperability and data portability rather than directly attacking market share or company size.
Scoring Lock‑In: A Heuristic for Partitioning Architectures
To reason about “good faith” vs “bad faith” integration, a participant described a heuristic scoring exercise they had done (via ChatGPT) across various products, rating them from 1 (minimal/justified lock‑in) to 10 (maximal/unnecessary lock‑in).
Approximate scores mentioned:
USB-C chargers: 1/10 (highly interoperable)
Framework laptop: 1/10 (explicitly repairable/modular)
Mechanical watches: 3/10
Regular cars: 3/10
K‑Cup machines: 4/10
Garmin sport watches: 4/10
iPhone ecosystem: 7/10
Amazon Kindle ecosystem: 7/10 (though recently added EPUB support)
Tesla: 7/10
Nespresso: 7/10
F‑35 standard variant: 8/10
Gillette razor blades: 8/10
Inkjet printers: 9/10
John Deere tractors: 9/10
Tonal home gym (hardware + SaaS): 9/10
Whoop fitness wearable subscription: 9/10
Nespresso vs K‑Cup illustration:
Nespresso:
Uses proprietary capsules with barcodes encoding coffee type and brew parameters.
Delivers noticeably better coffee quality than K‑Cup.
Can be partially “hacked” by reusing capsules with replacement foil lids, but barcodes still constrain behavior.
Seen as roughly 50/50 between legitimate technical integration and business capture.
K‑Cup:
More generic form factor; patents expired.
Capsules widely produced by third parties.
Less sophisticated integration of capsule → brew parameters.
The scoring scheme was not presented as rigorous but as a way to partition architectures:
Some dependency chains are inherently physics/engineering/economics‑driven.
Others involve clear intentional lock‑in where more open designs would be technically and economically plausible.
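The scoring exercise can be captured as a simple data structure and split with a threshold. This is only a sketch of the heuristic: the numbers are the ones mentioned on the call, while the cutoff of 5 is an arbitrary illustrative choice, not something the group agreed on.

```python
# Informal lock-in scores from the call:
# 1 = minimal/justified lock-in, 10 = maximal/unnecessary lock-in.
LOCK_IN_SCORES = {
    "USB-C chargers": 1,
    "Framework laptop": 1,
    "Mechanical watches": 3,
    "Regular cars": 3,
    "K-Cup machines": 4,
    "Garmin sport watches": 4,
    "iPhone ecosystem": 7,
    "Amazon Kindle ecosystem": 7,
    "Tesla": 7,
    "Nespresso": 7,
    "F-35 standard variant": 8,
    "Gillette razor blades": 8,
    "Inkjet printers": 9,
    "John Deere tractors": 9,
    "Tonal home gym": 9,
    "Whoop fitness wearable": 9,
}

def partition(scores: dict, threshold: int = 5):
    """Split products into roughly 'justified integration' vs 'intentional capture'."""
    justified = sorted(k for k, v in scores.items() if v <= threshold)
    capture = sorted(k for k, v in scores.items() if v > threshold)
    return justified, capture

justified, capture = partition(LOCK_IN_SCORES)
print("likely physics/economics-driven:", justified)
print("likely intentional lock-in:", capture)
```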
The group also discussed “walk‑away” characteristics: designing systems such that if the vendor walks away (stops support, shuts down servers), the system continues to function in a defined, safe way. Vitalik Buterin’s work was mentioned as gesturing toward technical mechanisms (including blockchain) for building such walk‑away properties into systems.
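One way to read the "walk-away" property in code is a device control loop that treats the vendor's server as optional: the device uses the server when it answers, and degrades to a defined local mode, rather than bricking, when the vendor disappears. A minimal sketch with hypothetical names; no real vendor API or the specific mechanisms in Buterin's work is implied.

```python
class Device:
    """Hypothetical connected device designed with a walk-away fallback."""

    def __init__(self, check_server):
        # check_server() returns True if the vendor's server answers.
        self.check_server = check_server
        self.mode = "online"

    def tick(self) -> str:
        """One control-loop step: prefer the server, never brick."""
        if self.check_server():
            self.mode = "online"  # full cloud-backed feature set
        else:
            self.mode = "local"   # defined, safe offline behavior
        return self.mode

# Vendor walks away: the server never answers again.
device = Device(check_server=lambda: False)
print(device.tick())  # → "local", not a brick
```

The design decision is made at build time, which matches the observation that end-of-life planning is cheap when considered up front.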
Geopolitics, Defense, and Access to Technology
The conversation briefly touched on military and national examples:
Aircraft procurement (Rafale vs F‑35):
India has evaluated French Rafale aircraft partly on the basis of greater technical access compared to the F‑35.
F‑35 is perceived as having an
unnecessarily locked‑in crappy software supply chain
that reduces autonomy for purchasing countries.
Historical Soviet vs Western equipment choices:
In the 60s–70s, India favored Soviet military equipment in part because it provided more access to underlying technology and repair/control capabilities.
Aerospace analogy for right to repair:
Rolls‑Royce jet engines on commercial aircraft:
Extreme complexity.
High safety stakes.
Strong argument for restricting who can touch them to highly qualified technicians within a tightly regulated environment.
“Right to repair” here must be contextual, granting repair rights to qualified actors (airlines, certified MROs, militaries), not necessarily to any random end user.
This reinforced the earlier theme: some domains (military, aerospace, safety‑critical infrastructure) justifiably require strong controls and certification, but even there customers seek meaningful control and local repair capacity, not total vendor dependency.
Market Segmentation and Target Demographics
One participant speculated that right‑to‑repair strategies are strongly shaped by target demographics:
Farmers (e.g., John Deere customers):
Historically resourceful and hands‑on.
Expect to maintain and repair their own equipment.
For this audience, aggressive DRM and closed diagnostics are arguably a bad business decision, as evidenced by rising backlash and litigation.
Designers / creative professionals on desktop Macs:
Value stability and minimal friction.
Generally do not want to tinker with internals; they care about file portability and tool reliability.
National defense buyers:
Will pay premiums for openness, diagnostic access, and local sustainment rights (e.g., Rafale vs F‑35).
The group noted that the John Deere case shows how overreaching lock‑in can backfire: instead of building a healthy third‑party ecosystem, the company now spends significant energy dealing with legal challenges and negative publicity.
Political vs Technical Dimensions
Several comments underscored that this is as much a political and corporate strategy issue as a technical one:
One participant with long history on the topic emphasized:
They had seen
third world farmers, developing country farmer, out sitting in his field with a bricked tractor
twenty years ago; this is not new.
At bottom, apart from some genuine safety and engineering questions,
mostly it’s a political issue and mostly it’s a corporate play.
It remains
a really good fight worth fighting for a while.
Another participant noted:
Arguments about engineering necessity can be sincere in some regimes (thin, high‑performance devices).
But there is also clear evidence of regulatory capture and deliberate rent‑seeking in others (e.g., John Deere, inkjet DRM).
Advocates and readers should be careful to parse which bucket a particular practice belongs to, rather than letting rhetoric blur categories.
Convergences, Divergences, and Open Questions
Convergences:
There are at least two distinct right‑to‑repair regimes:
Big, modular, standardized hardware abused by software locks.
Genuinely hard‑to‑repair integrated devices driven by physics and economics.
Remote bricking and unilateral vendor kill switches are broadly seen as unacceptable, especially when they destroy functionality of paid‑for products.
Interoperability and data portability are crucial adjacent issues; in some cases, they may be more tractable levers than direct right‑to‑repair mandates.
Some domains (aerospace, high‑safety systems) legitimately require restricted repair rights, but those rights should be granted to appropriately qualified actors, not monopolized solely by OEMs.
There is meaningful heterogeneity among big tech companies in terms of lock‑in and portability; they shouldn’t be treated as a monolith.
Divergences / Uncertainties:
How much of non‑repairability in integrated devices is “physics and manufacturing reality” vs preventable design choice?
Where exactly should liability fall when users or third‑party shops modify safety‑critical systems?
How practical is it to legislate “walk‑away” properties or end‑of‑life plans for digital services and connected devices?
To what extent will markets self‑correct (as with China’s EV safety rule) vs require more general regulatory intervention?
Open questions explicitly or implicitly surfaced:
Can we formulate a principled, cross‑domain test for when lock‑in is “justified integration” vs “unnecessary capture”?
What would robust, enforceable walk‑away guarantees look like for:
Connected vehicles?
Cloud‑backed consumer hardware?
Online games and SaaS?
How far should right‑to‑repair advocates push in the highly integrated regime (phones, tablets, wearables), given consumer preferences and engineering realities?
Could focusing policy more on interoperability and data portability (à la Tim Wu) indirectly improve right‑to‑repair outcomes by weakening vertical moats?
Wrap‑Up
Key takeaways:
Right to repair isn’t one problem but at least two: preventing software‑enforced lock‑in in otherwise repairable hardware, and dealing with inherent non‑repairability of tightly integrated devices.
Safety, liability, and regulatory capture are tightly intertwined; some manufacturer concerns are legitimate, others are pretexts for rent‑seeking.
Remote bricking and server‑dependent products that die at end of life (e.g., modern games) are flashpoints where consumer expectations and current practices sharply diverge.
Interoperability and data portability often provide a cleaner policy handle than directly legislating company size or banning specific business models.
Different user demographics (farmers vs designers vs militaries) reasonably demand different repair and control rights, and over‑locking systems for “hands‑on” users can be self‑defeating.
Open questions:
How to formalize “walk‑away” properties and require them in key domains?
Where should the legal boundary sit between user freedom to modify and manufacturer liability for downstream harms?
What set of metrics (like the informal 1–10 lock‑in score) could help policymakers and engineers evaluate systems in a consistent way?
Yak Collective Discord call thread:
https://discord.com/channels/692111190851059762/1470261901391822930


