July 23, 2008

Trusting Functionality

One of the major challenges we face in the design of our new linguistic command-line project is that of trust. As Zittrain argues in The Future of the Internet, this is the fundamental problem of generative systems and also their most valuable asset: a user’s ability to run arbitrary code is simultaneously what gives the personal computer its revolutionary power and what makes it so vulnerable.

At present, because our project is still in the prototyping stage, we’re opting for expressive freedom and experimentation over security. That means that all the various verbs we write, while authored in ordinary JavaScript, always execute with chrome privileges, meaning they’re capable of doing whatever they want to the end-user’s computer.
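To make the stakes concrete, here’s a sketch of what a misbehaving verb could do today. The notion of a standalone “verb function” below is hypothetical, but the XPCOM calls are real Firefox chrome APIs:

    // Hypothetical verb body: the verb-function wrapper is invented for
    // illustration, but any chrome-privileged script can run this code.
    function sketchyVerb() {
      var Cc = Components.classes, Ci = Components.interfaces;

      // Chrome code has ambient access to the filesystem: locate the
      // user's home directory and walk to a sensitive file.
      var file = Cc["@mozilla.org/file/directory_service;1"]
                   .getService(Ci.nsIProperties)
                   .get("Home", Ci.nsIFile);
      file.append(".ssh");
      file.append("id_rsa");  // e.g., a private SSH key

      if (file.exists()) {
        // Read the whole file into a string...
        var stream = Cc["@mozilla.org/network/file-input-stream;1"]
                       .createInstance(Ci.nsIFileInputStream);
        stream.init(file, -1, -1, 0);
        var sis = Cc["@mozilla.org/scriptableinputstream;1"]
                    .createInstance(Ci.nsIScriptableInputStream);
        sis.init(stream);
        var contents = sis.read(sis.available());
        sis.close();
        // ...and nothing in the platform stops the verb from quietly
        // sending `contents` anywhere on the network.
      }
    }

To the platform, this looks no different from a perfectly benign verb, and that’s precisely the problem.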

So the particular dilemma that needs to be solved here is: how can an end-user trust that a verb won’t do anything harmful to their data or privacy, whether intentionally or accidentally, while still providing a low barrier to entry for aspiring authors to write and distribute their own verbs?

We’ve considered some technical options so far. One is the idea of “security manifests” that ship with verbs, specifying what a verb is capable of doing. For example, the “email” verb mentioned in my last post could declare in its manifest that it needs access to the user’s email service and their current selection. This information could then be presented to the user when they choose to install the verb. At an implementation level, the verb’s code could run in a specially-constructed sandbox that ensures it never steps outside the bounds prescribed by its manifest. Alternatively, or in addition, we could use an object-capability subset of JavaScript like Caja. Such mechanisms ensure that untrusted code can only go as far as the end-user lets it, which unfortunately also puts a burden on said end-user. While I don’t personally mind shouldering that burden myself, I know I wouldn’t want to put it on my friends and family.
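To sketch what such a manifest might look like (every field name below is invented; we haven’t settled on any format), the “email” verb could declare something like:

    // A hypothetical security manifest for the "email" verb. All of the
    // field and capability names here are invented for illustration.
    var emailVerbManifest = {
      name: "email",
      // Capabilities the verb declares up front; the sandbox would deny
      // anything not listed here, such as filesystem or raw network access.
      capabilities: [
        "selection.read",      // read the user's current text selection
        "service.email.send"   // talk to the user's email service
      ]
    };

And in the simplest case, the object-capability approach amounts to handing a verb only the objects its manifest entitles it to, instead of ambient authority over the whole browser (a toy sketch of the style, not Caja’s actual machinery):

    // Toy illustration of object-capability style: the verb closes over
    // exactly two capabilities and can reach nothing else.
    function makeEmailVerb(emailService, getSelection) {
      return function run() {
        var selectedText = getSelection();          // the only selection access granted
        emailService.send({ body: selectedText });  // the only service access granted
      };
    }

At install time, the capability strings could be rendered as a human-readable prompt, which is exactly where the burden on the end-user creeps back in.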

Digital certificates are another component of potential solutions, but they come with their own problems. While they’re easy for centralized corporations to deal with, they’re problematic for more distributed operations, and the monetary cost of obtaining one significantly raises the barrier to entry for individual software authors. And even signed code doesn’t prevent the more privacy-invasive (but not outright malicious) classes of software like spyware: a signature tells you who published the code, not what it does.

As I’ve indicated earlier, this general issue of trust isn’t a new problem, nor is it important only to an experimental Firefox addon. It’s what Zittrain believes is at the core of the future of the internet and the PC, and the solutions we create, or fail to create, will determine whether that future is sterile or generative. Windows Vista, the most frequently exploited operating system as a natural result of its widespread use, is the harbinger of a future that relies entirely on technological and corporate trust hierarchies without taking any kind of social mechanism into account; the result is a notoriously hard-to-use interface that places an enormous burden on the end-user to constantly make informed security decisions about their computing experience. I don’t think the answer is merely a “less extreme” version of Vista (which is the model most other operating systems and extensible applications seem to be following); rather, a more effective solution is primarily a social one, supported by technological tools.

I have a particular solution in mind that I’ll be writing about soon. That said, I’d still love to hear any thoughts that anyone has on this topic.
