Forgetting: Logging as an ethical choice

I have kind of a weird idea for a database person.

Forgetting should be built into our applications by default. I just spent the weekend at FooCamp, where I held a session to discuss this idea and some of the possible consequences of implementing it.

To explain why I think this, I’m going to take an extreme stance for a moment and argue a position that I’d like to see rebutted. So, please have at it! 🙂

For too long we have allowed decisions made by developers – default application settings – to determine what ultimately becomes our level of surveillance.

There are notable counterexamples: 4chan intentionally expires postings after a few days. Riseup keeps no logs. The EFF documents what we are and are not legally required to keep. These, however, are the efforts of a tiny minority when considered against the rest of the web.

Over time, our conception of what is reasonable has changed around logging and accounting for vast periods of our activities. Never before would a silly recording taken by a 15-year-old be stored indefinitely, and then be documented as a watershed event because of how many times it was viewed in a vast global network, rather than for the content of the cultural artifact itself. The log of views itself was the cultural artifact, and it is celebrated.

Fading away isn’t evil. But we act like it is when we pipe what once was ephemeral into indefinite storage.

Why have we decided to participate in this social experiment? It really wasn’t a collective decision. Some software developers and investors decided that archival on a massive scale was important or profitable. We started calling these things “part of history” and just storing them without thinking about it. Saving became the default.

I’m not saying that archiving the internet, search robots or “opting in” are bad things. But those who least understand archiving’s effect on personal privacy may be the ones most likely to suffer in the future.

The ripple effects of the decision to move from “default expire” to “default save” are vast. Consider for a moment if we were to call the ability to intentionally forget on the internet a human right.

Instead, what we’ve done is to say to millions of people – you do not have the right to forget. Companies will take your locations and status updates, and never delete them. And privacy is rapidly becoming a privilege of those who can afford to buy it.

For the sake of argument, consider the difference between narrative historical documentation and collections of “facts.” The narrative is an aggregation, full of embellishments and forgetting and kernels of truth. Facts are collected, supposedly objectively. Both approaches to capturing historical thought suffer from the fallacy that historical “fact” is fixed and doesn’t evolve based on the viewer and reteller over time. How much worse is this effect when our collections of facts are now ballooning to include every blog post, photo, tweet and web access log you’ve ever made?

The point is not that individuals wish to change history or even obscure events which may reflect poorly on them. (Even though we all do!)

We need to give people a real choice – not a set of ACLs and rules. Choice about what is archived about them, control over that process, and a clear delineation between personal artifact and public property.

Kathy Sierra deleted her Twitter stream and was accused of removing a piece of history, and of possibly the worst internet offense – taking away conversations. Taken at face value, isn’t that the point of conversation? That it is ephemeral?

Conversations leave echoes in changed thoughts and light or deep impressions in the minds of the participants. Just because Twitter has by default chosen to retain these conversations indefinitely doesn’t change the nature of conversation itself. No one would argue that just because we share our thoughts that we are obligated to share every thought.

In the same way, we are not obligated to maintain a record of our sharing. And if we do maintain and share a record of our own end of a conversation, we still have the right to ultimately destroy it.

Once shared, of course, an artifact of a conversation can’t be taken away from those that have copies. But authors and owners of the original work must always retain the right to destroy.

So, that brings me to what is ethical in our applications. When we say: “we’re keeping your data forever” and “delete means your account will still be here when you come back”, application developers and companies are making an ethical choice. They are saying, “your shared thoughts aren’t your own – to remember or forget. We are going to remember all of this for you, and you no longer have the right to remove them.”

Connectedness is not the same as openness. Storing vast logs of data related to individuals which connect thousands of facts over the course of their lives should be presented as the ethical choice it is, rather than a technical choice about “defaults”. Picking what we decide to log and store is an ethical and political decision. And it should also be possible for it to be a personal decision.
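As a sketch of what “default expire” might look like in code – names and design here are entirely illustrative, not any real product’s API – a data store could attach a time-to-live to every record at write time, so that indefinite retention becomes the explicit opt-in rather than the silent default:

```python
import time

# A toy record store where forgetting is the default: every record gets a
# time-to-live at write time, and keeping something forever requires an
# explicit opt-in (ttl=None). All names here are hypothetical.
class ExpiringStore:
    DEFAULT_TTL = 7 * 24 * 3600  # one week, in seconds

    def __init__(self):
        self._records = {}  # key -> (value, expires_at or None)

    def put(self, key, value, ttl=DEFAULT_TTL):
        # Passing ttl=None is the deliberate "archive forever" choice.
        expires_at = None if ttl is None else time.time() + ttl
        self._records[key] = (value, expires_at)

    def get(self, key):
        value, expires_at = self._records.get(key, (None, None))
        if expires_at is not None and time.time() >= expires_at:
            del self._records[key]  # lazily forget expired records on access
            return None
        return value

    def forget(self, key):
        # Deletion actually deletes -- no soft-delete flag, no tombstone kept.
        self._records.pop(key, None)
```

The point of the sketch is the inversion: `forget` really removes the record, and remembering forever is a choice the caller has to make out loud, instead of “delete” quietly meaning “hide but retain.”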

Learning to think, session II

I attended a free session on “how to think” given by Hideshi Hamaguchi (his Twitter feed) last Friday night. Not only did I manage to turn what was essentially a design geek user group meeting into a “date night” with my husband, but I left the meeting with the delicious feeling you get when you’ve learned something really useful.

The session was focused on designers and design thinking. I found it applied even to my work – programming and database design, much of which I’ll claim is creative. I took many, many pages of notes – sketching out replicas of Hideshi’s carefully drawn diagrams. One lesson that stuck with me over the weekend is captured in the diagram that starts this blog post.

It’s a behavior-over-time graph, describing the transition from strategy to execution, with the line showing the growth in what you know about the problem you’re trying to solve. Briefly, strategy is defined as the combination of decisions that are needed to make a decision right now. Execution is what you do after you’ve made your decision. The vertical line shows the point at which you might decide to start thinking, or synthesizing information you’ve gathered. In the graph, that thinking line is pretty far along in the “what you know” curve. The length of time up until thinking begins is a missed opportunity — business-wise and creatively.

Consultants typically like to gather information – maybe asking lots of boilerplate questions of the client before embarking on the “thinking” phase of consultation. Hideshi suggested that instead of allowing information gathering to delay thought, we should all just immediately start thinking.

He gave the example of FedEx, and what a person who was about to talk to FedEx would know without asking any questions of the company: guaranteed delivery times and a hub-and-spoke architecture for their delivery system. Nothing is earth-shattering about those observations. They are simply things that you already know, and can use.

And here’s an observation I really thought about afterward: the length of time before you start to think is determined by your fears. The fear can be of the unknown, not having enough information, looking stupid or any number of other fears that we all have in a new situation. Taking a moment to reflect on what you already know might be the best strategy for eliminating that fear, and moving on to the useful, creative thought a client may be paying you for.

Much of the rest of the session was an exploration of a few ideas Hideshi had encountered in the last few weeks – creating a Museum of Design in Portland, and a couple of presentations he had made to help a famous blogger judge a Stanford University “innovative ideas” competition. Both were fun thought exercises, with the added bonus of seeing Hideshi’s creative output.

I’m very much looking forward to the next session.

Coders for software engineers

I read this article about computer science education this morning –

Software Engineering and the Cause of the CS Enrollment Crisis

I propose that our current undergraduate computer science programs are designed to produce coders for software engineers.

Yeah. This is so true! I immediately thought of Shelley Powers’ comments about what we should do with computer science curriculum:

Break up the computer science programs, split the participants into specialized fields within other disciplines, and stop spending all our time on talking about Ruby and how cool it is.

(btw, I don’t mind talking about how cool Ruby is.)

So much of programming for a business is finding solutions for real-world problems. And you need to do that cost effectively. There’s a lot of “value engineering” in there, rather than perfection. And for me, I think there’s often way too much emphasis on correctness for correctness’ sake, in education and in user group circles.

Here’s another choice quote:

It seems to me that the cause of the student’s disdain for “programming” and for the decline in CS enrollment lies there. As civil engineers need armies of construction workers to build their designs, and as mechanical engineers use armies of factory workers to produce their designs, so do software engineers use armies of programmers or coders, people who are explicitly not software engineers, to produce their designs. Few students go to college to become construction or factory workers. Why should it be surprising, then, that few Western students want to go to college to be the Information Age equivalent workers?

and a final point about creativity:

Computer scientists do not need to write good, clean code. Science is about critical and creative thinking. Have you ever read the actual source code for great programs like Sketchpad, or Eliza, or Smalltalk, or APL 360? The code that I have seen produced by computational scientists and engineers tends to be short, without comments, and is hard to read. In general, code that is about great ideas is not typically neat and clean. Instead, the code for the great programs and for solving scientific problems is brilliant. Coders for software engineers need to write factory-quality software. Brilliant code can be factory-quality. It does not have to be though. Those are independent factors.

Hell yes! I feel like so many of my computer science classes sucked the fun out of computers. The most fun I ever had in class was showing people how to use makefiles in the lab. By which I mean, not fun.

Fun was tracking down the exploits and then the crackers who broke into our servers, getting all the evidence together and talking to the FBI. And after that, learning about ways to monitor the system that wouldn’t be detected by intruders, but would immediately tell us someone had just managed to get elevated system privs. That was engaging. I did that work as a junior in college, but as a first-year CS student. And I learned something I’ll never forget about operating system privileges and system administration (thanks, Steve).

What about the third term of my Intro to CS class? Or my software development class? I was bored. I did the homework as fast as I could to get back to my real job.