I’m just some middle aged white guy, why is name privacy so important to me?

No Hate. Copyright © 2011 by Shadi Fotouhi

I’m just another middle-aged, reasonably well-off, American white guy.

So why do I believe so strongly in the importance of letting people control who sees their real name, when you don’t?

I was thinking about that this morning, because I know that if you’d asked me this question three years ago, I would have been strongly pro-privacy, but I would not have been as passionate about it as I am now. What’s changed?

The difference is that in the past three years, I’ve spent a lot of time socializing with people who are private about their birth names. I’ve met them on Twitter, and I’ve met them in person. I’ve even driven across the country to meet up with friends whose birth name I didn’t know until I was camped out on their couch. As a result, I’ve heard things that you just don’t hear when people have to use their birth names in public.

When you create a social networking site that requires real names, you create an artificial bubble. What you see is just the nice things in people’s lives, you don’t see what’s really happening. But when people have control over who knows their name, they still talk about cute cats and the latest iPhone and what kind of wine they drank last night, but they also talk about other things. They talk about dealing with their parent’s Alzheimer’s. They talk about how their daughter was missing for three days and got drugged and raped and the police refused to follow up. They talk about how they just lost their job and they’re worried that they’ll end up on the street. They talk about how their boss will fire them if he finds out they’re gay. They talk about how they were sexually abused as a kid. They talk about what it’s like to live in a country where bloggers get thrown in prison. People don’t dare talk about those things with their birth names; not when Google is indexing everything they say.

When you avoid or ban people who protect their birth names, you create an artificial world, one that doesn’t reflect what’s going on in the real world. When you surround yourself only with people who are using their birth names, you get the impression that everything is fine out there. That this is America, and people don’t discriminate, people aren’t ending up on the street through no fault of their own, people aren’t getting stalked to their doorsteps because someone learned their name, and people aren’t being judged by their sexual orientation. You’re surrounded by people who seem to be just like you, because the conversation has been reduced to what’s acceptable at the work watercooler.

The sad thing is, if you’re dealing with something difficult in your life, that bubble also makes you think you’re alone. You think you’re the only one, because nobody else is talking about how they’re going to pay for their parents’ nursing care, or how hard it is to juggle work and family.

Of course, maybe you don’t want to hear about other people’s problems on Google+. There’s nothing wrong with that. I don’t particularly want to hear what kind of wine Robert Scoble had last night, so I don’t circle him. If you don’t want to hear about how Jane S is dealing with her son smoking pot, then you don’t have to circle her. But that doesn’t mean that Jane S shouldn’t have a right to join Google+ and comment on your post about the latest merger, or give her opinion on the riots in London, or talk to friends who do want to talk about raising kids. Just because she protects her privacy more than you, doesn’t mean her opinion isn’t valuable. Furthermore, having people with different backgrounds in a discussion makes for a far more educational and interesting conversation.

Google’s name policy is intended to create the illusion that we are all at a fancy restaurant; they’ve explicitly used that metaphor. Unfortunately, in doing so they have denied access to a lot of interesting people; to teachers, lawyers, doctors, activists and government employees; people who aren’t allowed to use their real name to express their real opinions. And they’ve driven away a lot of people with a very legitimate need for privacy; the abused, the victims, the stalked, the discriminated against. That wasn’t Google’s intent, but they believe that losing ten or more percent of the population is a legitimate cost in their goal to create the illusion of normalcy.

I think people who say “I’m more comfortable talking to people who use their real names” or “they should find another social network” don’t realize just what a broad swath of the population is being eliminated by this policy. They don’t realize, because they’ve never had an honest and open conversation with anyone affected by it. They don’t know that their co-worker is gay, or that their favorite barista got raped last month, or that their son’s teacher is an atheist. They don’t know that the person they are banning may be a neighbor or even a friend. They also don’t realize how important online social networks are to people who don’t have the freedom to talk to their peers in any other environment. Social networks aren’t a “game”, they aren’t something you do outside of your “real” life. Social networks are a real place where real people meet, make friends, share ideas, create business relationships, and even end up getting married. And all of those things happen even if they initially meet without sharing their birth names. “Jane S” is just as real a person as “Jane Smith”, and perhaps even more so.

Google certainly has a right to create a fancy restaurant with an illusion that everyone is telling the truth about who they are. But it’s just that, an illusion. Many of us looked at Google as the one internet company that understood the importance of privacy. They stood up to China and left the market when forced to censor. They’ve fought the hackers who have attempted to keep Google from providing secure email to dissidents around the world. We thought that if Google was going to create a social network, they would create one that mirrored the real world. One where people had control over who saw their birth names and who didn’t. A social network that upheld the basic freedoms we expect in a democratic society. Instead, they just created a more authoritarian version of Facebook.

It doesn’t have to be this way. You can hit that “Send Feedback” button and tell Google that you don’t want them to discriminate. You can tell them that you’re happy to hear the opinions of people who don’t have the freedom and security to use their birth names. You can tell Google that you want to hear from people who come from different backgrounds than you. You can tell Google that you don’t really mind if that guy with the fabulous photos is called “John” or “JujuBoy”. You can tell Google that you want a social network where people are free to talk about all of their lives, not just the parts they don’t want in the paper tomorrow or in twenty years. Or you can decide that what you really want is an artificial bubble where everyone talks about technology and cat pictures.

Personally, I prefer reality.

For more details on who is hurt by Google’s policy, read “Who is harmed by a real names policy”(http://j.mp/pojGSo) or my long post here: http://j.mp/pJC2PO (skip to “Who Needs a Pseudonym?”). If you have any other thoughts on why it’s bad to let people control who sees their birth name, please read that post first, I probably discuss them.

For my thoughts on privilege, a word I always used to find personally insulting, read my post here: http://j.mp/o2ApQ3. What I refer to as “being in a bubble” has a lot to do with the concept.

For some excellent personal statements on the importance of name privacy, see http://my.nameis.me/

If you’re wondering where I came up with “ten or more percent of the population”, that’s what I believe is a conservative estimate, based on the number of people on Facebook who don’t use their real names. Those people are disproportionately minorities and women. Read researcher Danah Boyd’s article ““Real Names” Policies Are an Abuse of Power” at http://j.mp/ojrQ3g. I can’t find the original reference to the percentage (can anyone give me a link?), but it was confirmed by my own check of a few Facebook groups I belong to.

Drawing by my daughter, Shadi Fotouhi. (Still too young to join Google+ :). [Well, that was in 2011. As of 2017 she’s graduated from art school and is doing QA for a robotics company.]

Original post on Google+ here: http://j.mp/qlY5jv

On Pseudonymity, Privacy and Responsibility on Google+

[This was originally posted on Google+ (https://plus.google.com/117903011098040166012/posts/asuDWWmaFcq) where it went viral for a while. It’s still my most popular post. Since then of course Google finally gave up on their “real names” policy. Turns out it didn’t actually improve the quality of discussion at all–and it hurt people. Facebook, OTOH, still deletes accounts using pseudonyms, and it continues to be a tool of attackers to shut down victims.]

Google has said that they plan to “address” the issue of pseudonymity in the near future. I hope that these thoughts and experiences may help inform that decision.

Protections for anonymous speech are vital to democratic discourse. Allowing dissenters to shield their identities frees them to express critical, minority views . . . Anonymity is a shield from the tyranny of the majority. . . . It thus exemplifies the purpose behind the Bill of Rights, and of the First Amendment in particular: to protect unpopular individuals from retaliation . . . at the hand of an intolerant society.
———— 1995 Supreme Court ruling in McIntyre v. Ohio Elections Commission

This whole persona/pseudonym argument may seem like a tempest in a teapot, but the fact is, the forum for public discourse is no longer the town hall, or newspaper, or fliers on the street. It is here on the Internet, and it is happening in communities like this, hosted by private sector companies. Freedom of speech is not guaranteed in these places. As +Lawrence Lessig once said, “the code is the law.” The code that Google applies, the rules they set up now in the software, are going to influence our right to speak out now and in the future. It is imperative that we impress upon Google the importance of providing users with the same rights (and responsibilities) as exist in the society that nurtured Google and brought about its success.

I’m going to try to summarize the discussion as I’ve seen it over the past few weeks. Since this is a long post (tl;dr), here’s a description of what’s coming so if you want, you can skip to the section that you’re interested in.

First I’m going to address some red herrings; arguments that actually have no bearing on pseudonyms. I will explain why I think we should be having this discussion about a company’s product. I’ll explain, through painful personal disclosure, the experience of close friends, and other examples, why someone might want to use a pseudonym. Then I will address the arguments I have heard against pseudonyms (and some of them are quite valid), and what some alternatives might be.

I apologize for the length of this post, I know it could be trimmed.

What is Twitter For? The Message is the Medium.

The other evening I added a few new people to the list of folks I was following on Twitter. It was one of those typical social networking things; checking your “friends” to see who they were tracking, and then adding the ones that looked interesting.

The result, oddly enough, was a late night conversation on the pros and cons of Welfare. It felt very much like those late night conversations I used to have in college; when everyone was full of ideas and eager to explore them. It was really quite enjoyable. And it certainly created a bump in my Twitter usage.

People who don’t use Twitter often ask just what it’s for. Why would you want to broadcast what you’re doing to the world? That’s partially the fault of the Twitter folks themselves, for that “What are you doing?” prompt. A better prompt might be “What do you want to do?” (although perhaps a bit too reminiscent of Babylon 5 :-). Adam Engst once called Twitter “iChat on shuffle,” and it certainly can feel that way when you’re carrying on several conversations at once. But Twitter isn’t so much a piece of software that does something, as a medium through which software can do things. When the telegraph and the phone were introduced, people certainly wondered why you’d want one, but people found interesting things to do with them, many of which had never been anticipated by their inventors. That’s Twitter.

What’s interesting to me is the relative pluses and minuses of having this type of discussion in Twitter. Andy Ihnatko recently pointed out the rather obvious (sorry Andy) fact that trying to express complete thoughts to their conclusion in 140 characters is rather difficult. You can of course just post a second message to finish the thought, but the delays in Twitter make it less natural to do so than it might be in a chat program. The performance of Twitter reinforces that 140 character limit, and I’m not sure that’s a bad thing, because keeping points brief and concise may have the effect of equalizing the conversation. Nobody can dominate with a long exposition on a particular topic. Ratholes on side topics tend to be limited to pointers to URLs, not long conversations. You can use Twitter to espouse and justify an idea, but not to explain it in detail. But that’s fine. We have other media that are better suited to that. That’s not to say Twitter is perfect, separating incoming messages into categories (a New York Times news blast in the middle of a conversation is a bit distracting) and threading conversations (particularly when you aren’t following all the participants) would be big pluses. But those are things that Twitter clients can do, they aren’t necessarily drawbacks of Twitter itself.

Twitter’s limitations might make it seem superficial and trivial. But that’s like saying chatting around the water cooler is superficial. It can be, but it can also be a catalyst for new ideas that are followed up elsewhere. Andy’s Twitter posting was the catalyst for my writing this blog posting. The discussion on Welfare was the catalyst for making new connections on other networks. Social interaction takes place on many different levels, all of which are necessary. What we are seeing online is people taking new tools and adapting them, consciously or unconsciously, to fit the interactions they feel they need in a virtual world. The companies that are succeeding in the social networking sphere are those that either identify those needs, or more likely, have the flexibility to be molded by their users. Flickr, Facebook, Twitter… none of those companies are necessarily doing what they started out to do, but they were able to adapt to the way people used them. In Social Networking, you achieve success when you stop being an application and become a transparent part of people’s interactions.

But enough of that. I need to go fix the Welfare System!

P.S. I’ve suddenly become very self-conscious about the fact that I seem to be very fond of semi-colons.

Definition: Buffer Overflow

If you read any press on computer security problems, at some point
you are likely to come across the phrase “Buffer Overflow”–it’s by
far the most common security error that programmers make. It’s common
for several reasons.

  • It has nothing to do (by itself) with security.
  • It’s an easy error to make, and a hard one to detect.
  • It’s human nature not to expect the unexpected.

So what is a buffer overflow? I’ll start off extremely non-technical
here, and gradually bump up the level until the final section, at
which point if you don’t understand programming and call stacks you
may want to stop reading, and if you do understand them, you may
decide to start reading.

First, here’s the non-technical explanation.

You need to tell a co-worker something important, so you go to their
office, expecting a conversation something like this:

“I thought you should know about this new thing.”
“Oh? What is it?”
You tell them the important thing.

Instead the conversation goes like this:

“Hey! Just the person I wanted to see! Did you hear about this
crazy election thing,”…followed by five minutes of political
diatribe. By the end of the conversation, not only have you
forgotten what you came in to say, you’re on the way out the door
with a poster to protest something.

Your buffer just overflowed, and you were hijacked for a purpose
other than your original intent. You had an expectation of how the
conversation would go (the protocol) and it was violated, with the
result that you ended up doing something different. That’s exactly
what happens to a program when someone exploits a buffer overflow.

Now a slightly more technical explanation.

When a program is designed, it is designed with an interface to the
outside world. That interface is not just what you see on the
screen, but also how it communicates with other programs and the
operating system. The interface is typically defined in terms of
either an API (a set of programming conventions for direct
communication with another piece of code) or a protocol (a definition
of a set of data and commands to be passed between programs). Think
of the API as how your brain tells your arm to pick something up, the
protocol as how you ask someone to pass the salt. Of course the
protocols are not always executed directly. Your brain tends to use
the mouth API to tell someone to pass the salt, rather than using
telepathy directly, and many programs use standard sets of code
provided by the operating system when they want to use a protocol.

Now, these APIs and protocols specify the form of the information to
be passed back and forth. For instance, a specification might say
that the correct response to an initial communication is no more than
five letters long (e.g. “Hello”). In the days before people had to
worry about hostile programs, code was written assuming that the
program you were talking to was going to be following the rules of
the protocol. If the protocol said “five letters” then there wasn’t
a lot of point in leaving room for six. Sure, your program might
crash if there were six, but it wasn’t your bug, it was a bug in
the program talking to you–it should have sent five letters.

So that’s a buffer overflow. You expect one thing, and somebody
sends you something much bigger. The “buffer” that you had set aside
to store that information doesn’t have room for what you get, and you
end up writing those six (or six hundred) letters on top of other
things that you were trying to remember. Obviously that’s not going
to be a good thing for the continued functioning of your program, but
it turns out it’s also a major security problem.

And still a bit more technical.

Computers tend to think in terms of two things–code and data. Code
consists of the instructions for the computer, telling it what to do.
Data is what it does it to and with. When you run a program, it
loads into memory both the code and the data that code needs. When
that program communicates with some other program, it is receiving
data, and it will then use the code that it already has to figure out
what to do next. This makes remote communication relatively safe.
The remote program can only tell the local program what to do within the
constraints of the original code. Assuming nobody has done anything
stupid (which is not generally a good assumption), the remote program
cannot tell the local program to do anything that wasn’t originally
programmed into it.
Modern computer architectures have an unfortunate design, however.
They don’t really know the difference between data and code. If
somebody can convince your program to try running the data that it
has in memory, it will do so quite happily. So a malicious program
has two goals. First it wants to get some code to your machine, and
then it wants to persuade somebody to run it. This is of course, no
different than an email virus writer’s goal. In that case, they
expect you to run it, in the case of a buffer overflow, they expect
the broken program to run it. Email viruses are so successful
because users often don’t know the difference between data and code
either (and some operating systems helpfully try to hide the
difference so as not to confuse them).

It turns out that if a malicious programmer can find a target program
that didn’t check for a buffer overflow, it can be very trivial to
get that program to execute code provided by the remote program. So
easy, in fact, that there are standard packages out there that
provide the entire payload for the overflow–all the script kiddie
(we’ll define that sometime, but suffice to say it isn’t a compliment
of someone’s hacking prowess) has to do is find the right length for
the buffer overflow and bang–they have control of your computer.

Before you panic, remember that doing this requires that they have
remote access to a program on your computer already, and that that
program have a buffer overflow problem. That means (for an internet
exploit) that your computer has to have some program that is
listening to external connections (e.g. print server, file
sharing…) or that you have a malicious user at your computer (or
you helpfully downloaded and ran their software).

Now let’s get completely technical.

How does a buffer overflow exploit work from a programmer’s perspective?

First you find some place in that program where it’s reading data and
assuming that it’s going to be reading something rational. E.g.

        char    buf[4];      /* Store 4 characters */
        gets(buf);           /* Read any number of characters from the input
                                and put them in buf */

where the input turns out to be more than 4 characters long.

Now the question is, where is the data stored in “buf” located?

If “buf” is a global variable, then that data is probably allocated
in a data segment somewhere, and you’re going to try and overwrite
some other piece of data which will result in something useful (e.g.
a place where the program was going to execute one program, now
executes another). That’s tricky and hard to do without source code.

However “buf” is probably a local variable, allocated on the stack.
So instead of overwriting data, your goal is to overwrite the stack
itself. So you are going to put in buf some amount of padding (that
will overwrite the rest of the data stored on the stack), followed by
some machine code, followed by a new return address that points back at
that code. You’ll set things up so that your code will be executed
(typically when this particular function returns) instead of the code
that normally would have been executed. Now you’re home free. Since
there are plenty of examples of sample exploit machine code, all you
need to do when you find a new buffer overflow is figure out the
appropriate offset–the rest of the work has been done already. You
don’t need to transfer very much data, just enough to run something
that connects you to the remote machine–from there you can transfer
the rest of the software you want to install remotely.

This is where security-by-obscurity comes in handy. Want to lessen
the chance of buffer-overflow attacks? Just run some obscure piece
of hardware. Run a Mac, or even Linux on the PowerPC. [Of course with
Apple switching to an Intel platform, some of that obscurity goes away,
but exploits still have to vary from operating system to operating
system, even if the underlying processor is the same.] It’s not that
there aren’t buffer-overflow problems, but there are fewer handy
examples of how to exploit them running around. Fewer examples, fewer
successful attacks. It’s not a solution of course (especially if
everyone does it :-), but it is one way to slightly increase your
odds of remaining secure.

There are machine/OS architectures that would make buffer overflows
much harder to exploit. Disable dynamic creation and execution of
code on the stack for one. Or keep a separate data stack. And there
are tools out there which will put watchdog data on the stack, and
then watch it to make sure it doesn’t get overwritten (effective, but
rather painful from a performance standpoint). But fundamentally,
where there are bugs, there are exploits. And modern software, with
its layers and layers of abstraction that no one person can fully
grok, has a hell of a lot of bugs.

700 ISPs?

Recently Scott McNealy, predicting consolidation in the ISP market, was quoted
as saying that we no more needed 700 ISPs than we needed 700 electric power
companies. That’s an interesting analogy when you consider that the electric
power industry, now approaching deregulation, is probably approaching 700 companies
itself, many of whom don’t even own power facilities.

As usual, Scott is being quotable. Realistically, as more detailed comments
have indicated, it’s the medium sized ISPs that are likely to consolidate. Smaller
ISPs serve niche markets and provide personalized service, and are not
likely to be attacked by the larger players. I don’t believe the numbers of small ISPs are
likely to decrease, in fact, as I sit here after just having driven through
the sparsely settled high-plains of Utah, I suspect that the market for small
ISPs is far from saturated.

It’s dangerous to judge the progression of the internet by the progression
of businesses past. While it’s true that in areas of high competition, the service
and hardware requirements for an ISP are high, it is also true that in many
areas, anyone with a leased line and a couple of modems can still become an
ISP. While all eyes are turned towards the big IPOs, it’s the small businesses
that will keep the internet alive.

[2017: I was wrong. In fact McNealy was wrong, it’s more like 50.]

Why I do *not* support Family PC’s Parent’s Bill of Rights

When was it that everyone started to talk about
rights, and forgot all about responsibilities?

[A note from the future, in 2017. I was right. I didn’t get more conservative, and they grew up safe and awesome. One’s editing movies. The other’s working QA at a robotics company. Let’s hear it for sensible parenting.]

[Photo: Kee's kids--a long time ago]
Before I begin, let me set
some context. I’m a parent, I have two terminally cute daughters; one six, the
other four. I’ve heard that the number one correlation between sexual conservatism
and other factors is whether a person has daughters. Maybe things will be different
when they reach adolescence, but so far my values haven’t changed.

So now everyone’s talking about how to protect the rights of
parents on the internet.
This is apparently something that greatly concerns many parents, though
from a reading of the statistics,
I can only assume that it’s primarily a concern of parents who are not
on the internet, since those that are, aren’t even using the available
tools. But I don’t mean to belittle the core desire–parents
want to make sure that children’s exposure to new concepts and people is
consistent with their beliefs, whether that exposure is on the internet, the
street, or the corner store.

And that’s the fundamental issue I have with all this ruckus. The
internet doesn’t exist as a thing, it isn’t something that’s safe or not
safe. The internet is a community of people, and the things you
have to teach your kids in this community are the same as the things you teach
them in your own. Be polite, don’t interrupt, don’t speak unless
you have something to say, stay away from the seamier parts of town, and of
course, don’t go off alone with strangers. Those are values I try and teach my
kids. If I haven’t gotten them across by the time they learn to
send email, it’s probably too late anyway. But these values are not
specific to
the internet–I expect them to be applied online, and down at the
coffee shop.

I think I know where things went wrong. Some parents thought that if
their kid was staying home in front of the computer, that they were safe
and could be left alone–just like when they were sitting in front
of the television. They were wrong of course, the two mediums are not
comparable–there is far more violence and sex on television.

But on the internet, no one knows you’re a dog. What good does it do to teach
your kids right from wrong, if someone can pretend to be a teenage soulmate,
when they are actually a lecherous old man? There is some validity in this,
but frankly, anyone who has spent much time in online communities very quickly
learns that identity is both central to, and yet completely apart from, the
online experience. I spent my freshman year in college hooked on “the con”,
as it was called by those of us with access to
Time Sharing System
years before AOL’s forums and IRC. We all knew
the story of the guy that gets all excited about this great girl he’s been chatting
with for hours, only to walk over to his roommate’s cubby to tell him the news–and
find out he’s been chatting with him all this time. The notion of an
online identity, or identities, that is separate from your physical one is fundamental
to the system–our children will understand that long before their parents.
This isn’t the dark side of the internet, this is one of the liberating things
about the internet. (Note that having multiple identities is not the same as
being anonymous, I’ll talk more about that some other time–if you want some
mandatory reading on that subject, check out “The Transparent Society : Will Technology Force Us to Choose Between
Privacy and Freedom”
by David Brin.)

“But…,” (says my wife), “it is different, you think they
are safe because they are in the house. With other communities, you know where
they are.” Well, one can hope, but the teen pregnancy
rate in the U.S. would seem to argue otherwise. The fact of the
matter is that the older your kids get, the less control you have over them.
That’s why I see this whole thing very differently. This isn’t an issue of parents’
rights, it’s a question of parents’ responsibilities, and that’s
a word that seems to be very much out of favor recently. All the internet is
doing is bringing home that our job as parents is not to control, but to guide.

Here is FamilyPC’s “Internet Bill of Rights” proposal, with my responses. They asked if they
could publish my responses, so if you see it in a physical copy, let me
know which issue.

  1. Parental blocking software should be integrated into every Internet

    Including experimental
    ones? Ones meant for developer use only? Ones running on PDAs that
    have the necessary memory or CPU? No. If the demand is there, the
    will provide it. Protection of children is the responsibility of
    the parent,
    not something that can be regulated.
  2. Web site creators must
    rate their
    sites in an industry-standard way that is recognizable by the browser
    (for now this means using RSACi or SafeSurf or PICS).

    Aside from being
    unenforcable, there is no need to do this. If people stop going to
    sites, then sites will rate themselves. The fact of the matter is the
    majority of sites that rate themselves are adult sites–they don’t
    minors on their sites. The rest of sites don’t have the time or
    to rate themselves.
  3. An arbitration board should be created to
    discrepancies in site ratings.

    That implies that ratings have the
    of law. Ratings are going to be relative my definition. An independent
    board cannot be created to legislate free speech. If we were
    talking about
    signs on a front yard, this wouldn’t stand up in court for a

  4. Webmasters who do not comply with voluntary ratings should not be
    listed on the major search services.

    Absolutely not. This restricts adult access to sites, never mind
    access to sites outside of the United States. Search engines are
    already beginning to offer alternative, rated-only search facilities.
    There is no need to legislate this.
  5. Children’s chat rooms will be monitored to keep them safe;
    monitoring can be human or automated.

    If you are worried about what your children say to whom, then monitor
    them. Don’t forget to tape phone conversations and follow them to the
    school bathroom as well. Chat room monitoring is neither practical
    nor desirable.
  6. Web sites must fully disclose what they do with information
    collected from people who register at their sites.

    This is a general issue that has nothing to do with the specific
    issue you are addressing here.
  7. Advertising must be clearly labeled as advertising and kept separate
    from editorial content.

  8. If online shopping is involved, sites must require parental
    permission prior to purchase. Parents will be able to cancel an order
    mistakenly sent by a minor at no charge to the parent.

    The standards here should be the same as they are anywhere else. Use
    of a credit card is deemed to be an indication of adult status.
  9. If an advertiser communicates with a child by e-mail, the parent
    should be notified and should have the option, with each mailing, to
    discontinue them.

    If you want to disallow communications with children by advertisers,
    I might consider that a good goal. However, “on the internet no one
    knows you’re a dog”. It’s impossible to tell whether you are
    communicating with a child on the internet. As for the ability to
    remove yourself from mailings–go for it, but this is a general issue,
    not one specific to children’s/parents’ rights.
    Frankly I find the whole concept of an “Internet Bill of Rights” to
    be misguided. First we need to construct a Parents’ Bill of
    Responsibilities. For the past 15 years my closing email signature
    has been the same. And every year I feel it is more and more apt:
    “I’m not sure which upsets me more; that people are so unwilling to
    take responsibility for their actions, or that they are so eager to
    legislate everyone else’s.”

Privacy Distribution Mechanisms

I originally wrote this article in 1997 and posted it to my “blog” (back then I called it a ‘zine) as the first entry. As such the links are horribly out of date, and the formatting is a bit rigid. Fortunately, the OPS system described here died of neglect. But I’m sure it will come back in one form or another.

Kee Hinckley – Sept 14, 2005

When does a privacy enhancement
become a privacy distribution mechanism? Under
the guise of providing greater user privacy, Netscape, Microsoft and Firefly
have greatly increased the consumer information that will be available to
web sites.

A few months ago Netscape, Microsoft and Firefly together
announced a new initiative, the Open Profiling Standard (OPS), aimed at quelling
user fears over privacy invasions on the internet. It was a great success
(from a PR standpoint at least; implementation lags announcement, as usual).
The press picked it up and reported on it widely, but nowhere did anyone
seem to examine what this will really mean when it is deployed.

The Open Profiling
Standard (OPS) is

a proposed standard which enables the trusted exchange of profile
information between individuals and Web sites, with built-in privacy
safeguards. Firefly, Netscape, and Microsoft will work together
on the OPS proposal during the remainder of the standards review
process of the World Wide Web Consortium (W3C).

OPS is designed to enable personalized
electronic commerce, content and communication while providing
a framework for the individual’s privacy. OPS gives each person
complete control over the exchange and usage of their personal
information across the Web and also saves them valuable time
since they only have to enter their information once.

OPS offers Web sites a greater understanding
of their audiences therefore dramatically improving personalized
online content, marketing and commerce.

(The original link no longer works; the site has since reorganized.)

“OPS brings us
one step closer to market-based solutions for privacy protection,”
says Christine Varney, former commissioner of the Federal Trade
Commission (FTC).

The key component of
the proposed standard is the ability for users to manage how much
information their browser gives out and to whom particular information
is given.

What you may not know
is that the industry already has an excellent solution: the “Open
Profiling Standard” (OPS).

“OPS is a great
first step,” Gaddis says. “It raises consumer awareness
and allows consumers to protect themselves against the few bad
eggs that are present with any transaction.”
  But before we get into that, let’s step back for a moment and look at
the whole issue of privacy on the internet. This is an area fraught with
emotion, and greatly lacking in hard analysis.
When the web began, no one was thinking much about privacy. The HTTP
protocol provided a way for a browser to specify the identity of the user,
and many browsers sent that information, either in the form of an email
address, or just the initial account name. The server happily collected
the information and logged it in the log files. Early web servers even
had code which could be used to connect back to the sender’s computer
and (depending on the type of computer and the software running there)
verify the actual identity of the user (IDENTD).
These features were primarily used for tracking how many users (as opposed
to browser “hits”) had visited a site, and for contacting someone
who was apparently having trouble (lots of hits to misspelled pages or
some such) and helping them out. Those were the innocent days.
As web use increased, some people started realizing that they didn’t
really want every site they browsed to know who they were. People complained,
and the browser authors stopped sending the user identity. The log files
stopped receiving that information (although the empty identity field
still resides there–filled in only if the user provides a username and
password for a secure site).
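To make the log-file story concrete, here’s a present-day sketch (a made-up log line, not from any real server) of the Common Log Format those early servers wrote. The second and third fields are the IDENTD identity and authenticated username; once browsers stopped sending identity, they sit empty as “-” unless the user logs in to a protected page:

```python
import re

# A hypothetical entry in the Common Log Format used by early web servers.
# Field 2 (identd identity) and field 3 (authenticated user) are "-"
# unless the visitor supplied credentials for a protected page.
log_line = '192.0.2.7 - - [14/Sep/1997:10:32:05 -0400] "GET /index.html HTTP/1.0" 200 1043'

# Split out the fields to show the two identity slots are empty by default.
pattern = r'^(\S+) (\S+) (\S+) \[([^\]]+)\] "([^"]*)" (\d{3}) (\d+)$'
host, identd, user, when, request, status, size = re.match(pattern, log_line).groups()

print(identd, user)  # prints "- -": no identity was sent
```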
Some time thereafter, two new information sources became available to
web site developers. Some browsers began sending a “referer”
field–a piece of information that indicates the URL that the user was
viewing prior to reaching the current web page, and the Netscape browser
(followed by others) began allowing sites to stash a small “cookie”
that would be remembered for a specified period of time, and retrieved
any time the same site asked for it. Although cookies get all the press,
the referer field is actually the only feature capable of exposing personal
information that you’d rather not reveal. But this whole issue has everything
to do with emotion, and very little to do with facts. Let’s look at the
two features.
A “cookie” is a computer term for a small piece of information
that gets tucked away somewhere by a program for future retrieval. Sometimes
they are called “magic cookies”. The name implies an informal
storage mechanism, and typically cookies aren’t explicitly stored by the
user; they generally contain internal information that the program needs.
Programs use them all the time. When you restart a program and all the
windows come up in the same place as the last time you ran it, when you
bring up a search dialog in your word processor and the text of the last
item you searched for is sitting there pre-selected–those are all examples
of a program stashing away a cookie with some information in it. It didn’t
ask you if you wanted to save that information, it just stored it for
convenience’s sake. We don’t tend to think of those as privacy risks (although
if the last search you did was for “big fat boss”, and the next
person to use your computer is the aforementioned boss, you might think differently).
The cookies stored by your browser are no different. When you go to a
web site, it has the option of asking your browser to store some information
about your session so that it can access it at some future date. That
information is usually a session identifier, or some other data that will
enable the site to recognize you when you return. The site may use it
to remember your login information, or pre-fill that complaint form
so you don’t have to do it again, or just track the happy fact that you
have returned to the site. The cookie does not, and can not, contain any
information that you haven’t already provided to the site. It also cannot
be passed to any other site, so the information you enter on one site
can not be snarfed by some other site.
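A present-day sketch of that exchange, using Python’s standard http.cookies module (the session identifier here is a hypothetical value): the site asks your browser to remember it, and the browser hands the same opaque string back on your next visit to that site:

```python
from http.cookies import SimpleCookie

# Server side: ask the browser to remember a session identifier.
# The value is opaque data the site already knows; nothing new is exposed.
response_cookie = SimpleCookie()
response_cookie["session_id"] = "abc123"  # hypothetical identifier
set_cookie_header = response_cookie["session_id"].OutputString()
# set_cookie_header is 'session_id=abc123', sent as a Set-Cookie response header

# Browser side, on a later visit: the stored cookie comes back in the
# Cookie request header, letting the site recognize the returning visitor.
request_cookie = SimpleCookie()
request_cookie.load("session_id=abc123")
print(request_cookie["session_id"].value)  # prints "abc123"
```

Note that nothing in the round trip is information the site didn’t already have; the cookie only lets it match you up with your earlier session.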
Referer Fields
Referer fields are slightly different. What they tell a site is how you
got there. Within a site they are often used for tracking your movement
so that the user interface designers can look at how people are using
a site and modify the interface to better give people access to sections
that aren’t being visited. However what is usually of more interest is
the site that you were on before you came to this one. That gives site
owners an idea of which remote links are most useful and/or cost effective.
The catch is that browsers don’t just pass the referer field when you
click on a link, they also often pass it when you type in a URL. So it
is possible that sites will pick up the fact that the previous site you
were visiting was, shall we say, not one that you might like the world
to know you were visiting. It’s rather like stepping out of the adult
bookstore and bumping into your next door neighbor.
Oddly enough, though, the referer fields have never really caught on
as a “privacy risk” in the press. So be it.
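For the curious, here is a present-day sketch (hypothetical URLs) of what the browser actually sends, using Python’s standard urllib: the Referer header rides along with the request, naming the page you just came from, and the destination server is free to log it:

```python
import urllib.request

# A request imitating what a browser sends: the Referer header names the
# page the user was viewing before following a link to the destination.
req = urllib.request.Request(
    "http://example.com/destination",  # hypothetical site being visited
    headers={"Referer": "http://example.com/previous-page"},  # where we came from
)

# The destination server sees this value and can record it in its logs.
print(req.get_header("Referer"))  # prints "http://example.com/previous-page"
```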
Selling Yourself
As you travel from one site to another on the web, you may be amazed
at how much is being given away for free. Research reports, news, travel
directions… the list goes on and on. And it’s all free! Sites that charge
money for access are few and far between.
Appearances can be deceiving. In fact there are many, many sites on the
web that are charging for access, it’s just that the currency isn’t what
you are used to. Instead of cash, the currency is personal information.
Information about your age, your sex, your marital status, your wealth.
Some sites are subtle: Lucent’s MapsOnUs
lets you use the site several times before it asks for some information
about you (couldn’t do that without cookies :-). Other sites barely let
you past the front page before insisting that you register. Others tempt
you with a contest of some sort. But the end result is the same: you’ve
sold some part
of your electronic soul for access to the site. You’ve exchanged one sort
of information for another.
But what will those people do with that information? Will they sell it
to a mailing list? Will it be picked up by spammers? Will tons of junk
paper mail start arriving at work? These questions started the privacy
experts questioning the whole process, although in practice this is no
different than filling out a magazine’s bingo card (and usually far more
rewarding). In stepped Netscape, Microsoft, Firefly and others with the
OPS, a combination of two technologies and a business practice aimed
at giving users more control over their privacy–at least in theory.
The technologies are the vCard standard
from the Internet Mail Consortium, and Digital Certificates (also known
as X.509), an IETF (Internet Engineering Task Force) standard. The vCard
standard specifies a format for storing and exchanging personal
information (typically the type found on a business card, but it can
cover just about anything). Digital Certificates provide a mechanism for
providing secure storage and transmission of identification
information–the driver’s license of the internet.
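If you’ve never seen one, a vCard is nothing exotic: just structured text, one property per line. A present-day sketch (the person and address are made up) of the sort of minimal profile OPS would trade on your behalf:

```python
# A hypothetical minimal vCard, the personal-information format OPS builds on.
# Each line is a property: FN is the formatted name, EMAIL an address.
vcard = "\r\n".join([
    "BEGIN:VCARD",
    "VERSION:2.1",
    "FN:Jane Example",         # hypothetical person
    "EMAIL:jane@example.com",  # hypothetical address
    "END:VCARD",
])
print(vcard.splitlines()[0])  # prints "BEGIN:VCARD"
```

Real profiles would carry far more properties (addresses, phone numbers, and so on), which is exactly the point: the format makes it trivial to hand all of it over at once.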
The business process that ties these together is a promise from companies
signing up for this standard that they will adhere to certain privacy
practices:
Web sites that
adopt OPS are strongly encouraged to adopt a recognized privacy
assurance program that includes third-party auditing, and to clearly
post their privacy policies on the site where visitors can see
them. In addition, consumers are cautioned not to release their
Personal Profile to any site that does not post its privacy policies
and submit to third-party auditing.


As business practices go, that one is pretty weak, and nothing that couldn’t
have been done without all this new technology. So what does the new technology
provide to enhance privacy?
Frankly, nothing. What the OPS does is let you enter your personal
information only once, so that when a site asks for your information,
it becomes incredibly easy to provide it. Where before you might have
had to fill out a form with home and work addresses, sex, marital status,
income and the like, now you can just hit the “Okay” button
on your browser and have all that information automatically sent to the
remote system. Where before you might have skipped the non-mandatory fields
in a form, now you’ll send them anyway; it’s not any harder.
In sum, the OPS is really a mechanism to make it easier for consumers
to tell vendors information about themselves. It provides no more control
over privacy information than the current “fill out the form”
mechanism, and is far more likely to increase the distribution of personal
information to multiple companies. It’s not a “bad” technology
in any sense, but the PR that it has gotten is deceptive–OPS does nothing
to enhance privacy.