Thursday, March 29, 2007

What I'm Working On

In response to an in-house request to describe, in accessible language, what it is that I'm working on.

I don't mind explaining - though I will confess it's difficult to explain. It really combines a number of quite distinct ideas in a way that isn't always clear.

The idea is based in e-learning but isn't limited to that. The challenge of e-learning has always been to locate and deliver the right resources to the right person. A *lot* of digital ink has been spilled on this. Mostly, the way people approach it is to treat online resources as analogous to library resources, and hence to depict the problem of locating resources as a search and retrieval problem. Which in a certain sense makes sense - how else are you going to find that one resource out of a billion but by searching for it?

And some good work has been done here. The major insight, prompted by the Semantic Web, was that resources could be given standardized descriptions. In e-learning we got the Learning Object Metadata, a set of 87 or so data fields that e-learning designers should provide in XML format to describe their learning resources. This would allow for searches - not just keyword or phrase searches, which Google already does, but structured searches. For example, Google could never discover a resource that is best for Grade 10 students, but if somebody filled out the TypicalAgeRange tag then the resource would become discoverable.
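The difference can be sketched in a few lines of Python. The records and field names here are invented for illustration (this is not real LOM tooling); the point is simply that a structured field supports a query that keyword matching cannot.

```python
# A rough sketch (invented records, not real LOM): once a structured
# field like TypicalAgeRange is filled out, a query can do what keyword
# search cannot - select by audience rather than by matching words.
resources = [
    {"title": "Introductory Algebra",  "typical_age_range": "15-16"},
    {"title": "Algebra for Engineers", "typical_age_range": "18-22"},
    {"title": "Grade 10 Science Labs", "typical_age_range": "15-16"},
]

def find_by_age_range(items, age_range):
    """A structured search: match on the metadata field, not the text."""
    return [r["title"] for r in items if r["typical_age_range"] == age_range]

grade_ten = find_by_age_range(resources, "15-16")
```

Note that "Introductory Algebra" is found even though the phrase "Grade 10" appears nowhere in it - the metadata, not the content, makes it discoverable.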

That, indeed, has always been the limit of data mining technologies. No matter how good your analysis, you have only the resource itself to look at. And sometimes these resources are pretty opaque - a photo, for example - and while we can (and do) locate resources on the basis of their similarity to each other, we cannot differentiate between physically similar, but otherwise very different, resources. Consider, for example, the problem of detecting pornography in image libraries (from either the standpoint of retrieval or filtering - it's the same either way). It's not just a question of being able to distinguish between porn and sports coverage of a swimming meet, but also distinguishing between porn and medical journals, anthropology and art. Naked bodies always look pretty similar; whether one is scientific or pornographic is a matter of interpretation, not substance.

On the internet, what some people have realized is that this sort of problem is not so much a problem of description as a problem of relation (a good thing, too, because studies showed that nobody was going to fill out 87 metadata fields). A type of technology called 'recommender systems' was employed to do everything from picking music to matching you with your perfect date. A recommender system links three different types of data: a description of a resource, a description of a person, and an evaluation or ranking. In summary, we were looking for statements of the type, "people like P thought that resources like R were rated Q". This formed the basis of the sifter-filter project, which was adopted by some people in Fredericton and became RACOFI. Here's one presentation of the idea, which predates RACOFI. Here's another.
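The "people like P rated resources like R as Q" pattern can be sketched very simply. This is a toy, not RACOFI's actual algorithm; the users, resources, and the similarity measure are all invented for illustration.

```python
# A hedged sketch of a recommender system (names and ratings invented):
# "people like P thought that resources like R were rated Q".
ratings = {
    "alice": {"r1": 5, "r2": 4, "r3": 1},
    "bob":   {"r1": 5, "r2": 5, "r3": 2},
    "carol": {"r1": 1, "r2": 2, "r3": 5},
    "dave":  {"r1": 5, "r2": 4},          # hasn't seen r3 yet
}

def similarity(a, b):
    """Crude 'people like P': inverse of mean absolute rating difference."""
    shared = set(ratings[a]) & set(ratings[b])
    if not shared:
        return 0.0
    diff = sum(abs(ratings[a][r] - ratings[b][r]) for r in shared) / len(shared)
    return 1.0 / (1.0 + diff)

def predict(user, resource):
    """Similarity-weighted average of what others rated the resource."""
    raters = [u for u in ratings if u != user and resource in ratings[u]]
    pairs = [(similarity(user, u), ratings[u][resource]) for u in raters]
    total = sum(w for w, _ in pairs)
    return sum(w * q for w, q in pairs) / total if total else None
```

Dave's tastes match Alice's, who disliked r3, so the system predicts a low rating for Dave on r3 - no description of r3 itself was ever consulted.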

Part of this work involves the idea of the resource profile. This is a concept that is unique to our project. The main point here is that, for any resource, there are multiple points of view. The very same book may be described as heavy or light, as good or bad, as appropriate or inappropriate, depending on who is doing the describing. Crucially, it is important that the people producing the book not be the only ones describing the book (otherwise every book would be 'excellent!!'). That's why we have reviewers. Looking at this more closely, we determined that there are different types of metadata: that created by the resource author, that created by the user of the resource, and that created by disinterested third parties (such as reviewers and classifiers). But when we look at this - the different types of resource, and the different types of metadata - it becomes clear that thinking of metadata as anything like a document is misguided. Metadata is the knowledge we have of an object - specifically, the profile - but this varies from moment to moment, from perspective to perspective. My paper, Resource Profiles, describes this in detail.
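One way to see the contrast with a single metadata document is to sketch a profile as a collection of attributed assertions. The structure below is purely illustrative (it is not a standard, and the sources and fields are invented).

```python
# Sketch of a 'resource profile' (illustrative structure, not a standard):
# metadata about one resource is a collection of assertions, each with a
# source, rather than a single authoritative document.
profile = []

def describe(resource, source, field, value):
    profile.append({"resource": resource, "source": source,
                    "field": field, "value": value})

describe("book-42", "publisher", "quality", "excellent")   # authors always say this
describe("book-42", "reviewer",  "quality", "mediocre")
describe("book-42", "reader",    "difficulty", "heavy")

def views(resource, field):
    """All points of view on one field - there is no single 'true' record."""
    return {(a["source"], a["value"]) for a in profile
            if a["resource"] == resource and a["field"] == field}
```

Ask the profile about quality and you get both answers, with provenance - the publisher's 'excellent!!' never overwrites the reviewer's verdict.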

The key here is this: knowledge has many authors, knowledge has many facets, it looks different to each person, and it changes from moment to moment. A piece of knowledge isn't a description of something, it is a way of relating to something. My 'knowing that x is P' is not a description of 'x', it is a description of 'the relation between me and x'. When I say 'x is P' and you say 'x is P' we are actually making two different statements (this is why the semantic web is on the verge of becoming a very expensive failure - it is based on a descriptive, rather than a relational, theory of knowledge). One way of stating this is that my 'knowing that x is P' is a way of describing how I use x. If I think 'x is a horse', I use it one way. If I think 'x is a tree', I use it differently. This is especially evident when we look at the meanings of words (and especially, the words that describe resources). If I think "'x' means P" then I will use the word 'x' one way. If I think "'x' means Q", I will use it a different way. Hence - as Wittgenstein said - "meaning is use".

The upshot of all of this is that no descriptive approach to resource discovery could ever work, because the words used to describe things mean different things to different people. You don't notice this so much in smallish repositories of only tens of thousands of items. But when you get into the millions and billions of items, this becomes a huge problem (and a bigger one still when you add into the mix the fact that people deliberately misuse words in order to fool other people).

OK. Let's put that aside for the moment. As metadata was being developed, on the one hand (by the semantic web people) as a description format, it was also being developed (by the blog people) as a syndication format. That is to say, the point of the metadata wasn't so much to describe a resource as it was to put the resource into a very portable, machine-readable format. The first, and most important, of these formats was RSS. I have been involved in RSS for a very long time, since the beginning (my feed was Netscape Netcenter feed number 31). It was evident very early to me that syndication would be the best way to address the problem of how to deliver selected learning resources to people. Here's where I first proposed it.

As we looked at the use of RSS to syndicate resources, and the use of metadata to describe resources, it became clear that content syndication would best be supported by what might be called 'distributed metadata'. The idea here is that the metadata distributed via an RSS feed links to other metadata that may be located elsewhere on the internet.

We used this to develop and propose what we now call 'distributed digital rights management'. The idea is that, in resource metadata, which is retrieved by a user or a 'harvester', there is a link to 'rights metadata', in our case described in the Open Digital Rights Language (ODRL). This way, the RSS metadata could be sent out into the world, distributed to any number of people, stored who knows where, and the rights metadata could sit right on our own server, where we could change it whenever we needed to. Since the rights metadata in the RSS file was only a pointer, the rights information would always be up to date. Here are several presentations related to the concept.
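The pointer mechanism is easy to sketch. The dictionary below stands in for rights files sitting on our own server, and the URL and license strings are invented; the ODRL detail is omitted entirely. What matters is that the syndicated item carries only a link, so a change on the server is visible to every copy.

```python
# Sketch of distributed rights metadata (illustrative values; the dict
# stands in for rights files on our own server). The syndicated item
# carries only a *pointer*, so rights can change after distribution.
rights_server = {
    "https://example.org/rights/r1.odrl": {"license": "free for education"},
}

rss_item = {
    "title": "Some Learning Resource",
    "rights_link": "https://example.org/rights/r1.odrl",  # pointer, not a copy
}

def current_rights(item):
    """Dereference the pointer at read time - always up to date."""
    return rights_server[item["rights_link"]]

before = current_rights(rss_item)["license"]
# The owner changes the terms; every already-distributed copy 'updates':
rights_server["https://example.org/rights/r1.odrl"] = {"license": "fee required"}
after = current_rights(rss_item)["license"]
```

Had the license been copied into the RSS item itself, every distributed copy would still carry the stale terms.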

This is the mechanism ultimately employed by Creative Commons to allow authors to attach licenses to their work (and there is a CC declaration in RSS). It is also, belatedly, how other standards bodies, such as Dublin Core, have been approaching rights declarations. To be sure, there is still a large contingent out there that thinks rights information ought always to accompany the object (to make the object 'portable' and 'reusable'). It is, again, this old idea that everything there is to know about an object ought to be in the object. But 'rights', like 'knowledge', are volatile. A resource (such as an Elvis recording) might be worth so much one day (Elvis is alive) and twice as much the next day (Elvis is dead). The owner of a Beatles recording might be Paul McCartney one day and Michael Jackson the next.

The combination of resource profiles, syndication, and distributed metadata gives us the model for a learning resource syndication network. Here are the slides describing the network and the paper. This is what we had intended eduSource to become (unfortunately, people with different interests determined that it would go in a different direction, leaving our DRM system a bit of an orphan - and eduSource, ultimately, a failure). But if we look at the RSS network (which now comprises millions of feeds) and the OAI/DSpace network (which comprises millions of resources) we can see that something like this approach is successful.

That's where we were at the end of the eduSource project. But the million dollar question is still this: how does your content network ensure that the right resource ends up in the right hands?

And the answer is: by the way the network is organized. That - the way the network is organized - is the core of the theory of learning networks.

But what does that mean?

Back in the pre-history of artificial intelligence, there were two major approaches. One approach - called 'expert systems' - consisted essentially of the attempt to codify knowledge as a series of statements and rules for the recovery of those statements. Hence rule-based AI systems, typically written in languages like LISP. The paradigm was probably the General Problem Solver of Newell and Simon, but efforts abounded. The expert system approach brought with it (in my view) a lot of baggage: that knowledge could be codified in sentences, that thought and reasoning were like following rules, that human minds were physical symbol systems, that sort of thing. (This approach - not coincidentally - is what the Semantic Web is built upon.)

The other approach was called 'connectionism', advocated by researchers like Rosenblatt and, later, Rumelhart and McClelland (and famously critiqued by Minsky and Papert). It was based on the idea that the computer system should resemble the mind - that is to say, that it should be composed of layers of connected units or 'neurons'. Such a computer would not be 'programmed' with a set of instructions; it would be 'trained' by presenting it with input. Different ways of training neural nets (as they came to be called) were proposed - simple (Hebbian) associationism, back-propagation, or (Boltzmann) 'settling'. The connectionist systems proved to be really good at some things - like, say, pattern recognition - but much less good at other things - like, say, generating rules.
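The flavour of 'trained, not programmed' can be shown with the smallest possible connectionist unit - a single perceptron. This is a toy, nothing like a full neural net, but the point carries: nobody writes the OR rule into the code; the weights settle into it from examples.

```python
# A minimal connectionist sketch: a single perceptron, trained by being
# shown examples rather than programmed with rules. Here it learns
# logical OR from four input/output pairs.
def train(examples, epochs=20, lr=0.1):
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in examples:
            out = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
            err = target - out          # simple error-correction rule
            w1 += lr * err * x1
            w2 += lr * err * x2
            b += lr * err
    return w1, w2, b

examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w1, w2, b = train(examples)

def predict(x1, x2):
    return 1 if w1 * x1 + w2 * x2 + b > 0 else 0
```

After training, the knowledge of OR is nowhere stated as a rule - it exists only as a way the weights are organized, which is exactly the point made below.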

If we look at things this way, then it becomes clear that two very distinct problems are in fact instances of the same problem. The problem of locating the right resource on the internet is basically the same problem as the problem of getting the question right on the test. So if we can understand how the human mind learns, we can understand how to manage our learning resource network.

Connectionism says that "to learn that 'x is P' is to be organized in a certain way", to have the right set of connections. And if we recall that "A piece of knowledge isn't a description of something, it is a way of relating to something. My 'knowing that x is P' is not a description of 'x', it is a description of 'the relation between me and x'" it becomes evident that we're working on the same theory here. The problem of content organization on the internet is the same as the problem of content organization in the brain. And even better: since we know that 'being organized in a certain way' can constitute knowledge in the brain, then 'being organized in a certain way' can constitute knowledge in the network.

Connectionism gives us our mechanics. It tells us how to put the network together, how to arrange units in layers, and suggests mechanisms of interaction and training. But it doesn't give us our semantics. It doesn't tell us which kind of organization will successfully produce knowledge.

Enter the theory of social networks, pioneered by people like Duncan J. Watts. In the first instance, this theory is an explanation of how a network of independent entities can become coordinated with no external intervention. This is very important - the knowledge a network produces must come from the network itself, for otherwise we have to find the knowledge in some person, which simply pushes the problem back a step. Networks organize themselves, Watts (and others) found, based on the mathematical properties of the connections between the members of the network. For example: a cricket will chirp every second, but will chirp at an interval of as short as 3/4 of a second if prompted by some other cricket's chirp. Provided every cricket can hear at least one other cricket, this simple system will result in crickets chirping in unison, like a choir, all without any SuperCricket guiding the rest.
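The cricket story can be simulated in a few lines. This is a Kuramoto-style toy, not Watts's actual mathematics, and the coupling rule and starting phases are invented; but it shows the essential thing: local adjustments, no conductor, global unison.

```python
# A toy of the cricket example - a Kuramoto-style sketch, not Watts's
# mathematics. Each cricket nudges its chirp phase toward the crickets
# it hears; no SuperCricket coordinates them.
def circ_diff(a, b):
    """Shortest signed distance from phase a to phase b on a unit circle."""
    return (b - a + 0.5) % 1.0 - 0.5

def step(phases, nudge=0.3):
    """Every cricket moves part-way toward the average of what it hears."""
    n = len(phases)
    return [(phases[i] + nudge * sum(circ_diff(phases[i], phases[j])
                                     for j in range(n)) / n) % 1.0
            for i in range(n)]

phases = [0.0, 0.1, 0.2, 0.85]   # start out of unison
for _ in range(100):
    phases = step(phases)

def circ_dist(a, b):
    d = abs(a - b) % 1.0
    return min(d, 1.0 - d)

spread = max(circ_dist(a, b) for a in phases for b in phases)
# after enough rounds the chirps are effectively in unison (spread ~ 0)
```

No cricket ever 'knows' the group's rhythm; the unison exists only in the way the connections are organized.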

Similar sorts of phenomena were popularized in James Surowiecki's The Wisdom of Crowds. The idea here is that a crowd can determine the right answer to a question better than an expert. I saw personally a graphic example of this at Idea City in 2003 (they don't let me go to Idea City any more - too bad). The singer Neko Case asked the crowd to be her chorus. "Don't be afraid that you're out of tune," she said. "One voice is out of tune - but when 300 voices sing together, it's always perfectly in tune." And it was. The errors cancel out, and we each have our own way of getting at least close to the right note, with the result that all of us, singing together, hit it perfectly.
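The error-cancelling claim is easy to check numerically. The numbers here are invented (a 440 Hz target and a generous per-singer error), but the statistical effect is real: independent errors shrink roughly with the square root of the crowd's size.

```python
import random

# A toy of the choir effect (numbers invented): each voice is noisily
# off-pitch, but the average of 300 voices lands almost on the note.
random.seed(7)  # deterministic for the example
TRUE_PITCH = 440.0  # A above middle C, in Hz

def singer():
    """One voice: the right note plus that singer's own error."""
    return TRUE_PITCH + random.gauss(0, 15)

crowd = [singer() for _ in range(300)]
crowd_pitch = sum(crowd) / len(crowd)
crowd_error = abs(crowd_pitch - TRUE_PITCH)
# a single voice is typically ~12 Hz off; the crowd is off by only a
# fraction of that, because the individual errors cancel
```

This is the statistician's version of Neko Case's promise: one voice is out of tune, three hundred voices are not.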

So knowledge can be produced by networks. But what kind of networks? Because everybody knows about lemmings and mob behaviour and all sorts of similar problems - 'cascade phenomena', they are called in the literature. They are like the spread of a disease through a population - or the spread of harmful ideas in the brain. This is where we begin with the science of learning networks.

The first part was to combine the science of social networks with the idea of the internet and metadata, which was done in papers like The Semantic Social Network. Thus we have a picture of a network that looks like the social networks being described by Watts and Surowiecki. These have been (badly) implemented in social network services such as Friendster and Orkut. To make this work, a distributed identity network is required. This was developed as mIDm - here and here. Today a similar concept, called OpenID, is in the process of being implemented across the internet.

Another part was to provide a set of design principles for the creation of networks that will effectively avoid cascade phenomena. Drawing from the earlier parts of our work, including ideas such as distributed metadata, a theory of effective networks was drafted. Slides, and Robin Good's nicely illustrated version of my paper. The proposal here is that networks exhibiting the eight principles will effectively self-organize. This is a very rough rule of thumb, intended to stand in for mathematics that may never be fully solved, because the phenomena being described are complex phenomena (like weather systems or ecologies) with multiple mutually dependent variables; very simple examples of these sorts of organizing principles can be seen in things like the Game of Life.

Adding to this was what I called the 'semantic principle', which is our assurance that the forms of organization our networks take will be reliable or dependable forms of organization. The epistemology of network knowledge is described in detail in my paper An Introduction to Connective Knowledge and Learning Networks and Connective Knowledge.

On the technical side, my main attempt to instantiate these principles is embodied in my development of Edu_RSS. I am currently migrating Edu_RSS from the NRC server to my own server, as directed. The idea behind Edu_RSS is that it harvests the RSS feeds of roughly 500 writers in the field of online learning, combines these feeds in different ways, and outputs them as a set of topical feeds. The system also merges with my own website and commentary. The idea is that a system like Edu_RSS is like one node in the network - ultimately, layers of the network will be created by other services doing much the same sort of thing. For a description of Edu_RSS see here.

Very similar to Edu_RSS in concept and design is the student version of the same idea, generally known as the Personal Learning Environment. The PLE differs from Edu_RSS in that it depends explicitly on external services (such as Flickr, Blogger and the like) for data retrieval and storage. The 'node in the network', with the PLE, is actually virtual, distributed over a number of websites, and also very portable (ideally, it could be implemented on a memory stick). I am working on the concept of the PLE both by myself and with external organizations.

Again, the idea behind these applications is to bring some of the threads of this whole discussion into convergence - distributed metadata, content syndication, distributed rights, identity, data, autonomy and diversity of perspective and view, multiple simultaneous connections creating 'layers' of interconnected individuals, and the rest.

The purpose of the Learning Networks project, over and above the theorizing, is to build (or help build) the sorts of tools that, when used by largish numbers of people, result in a self-organizing network.

The idea is that, when a person needs to retrieve a certain resource (which he or she may or may not know exists), the network will reorganize itself so that this resource is the most prominent resource. Such a network would never need to be searched - it would flex and bend and reshape itself minute by minute according to where you are, who you're with, and what you're doing, and would always have certain resources 'top of mind' which could be displayed in any environment or work area. Imagine, for example, a word processor that, as you type your paper, suggests the references you might want to read and use at that point. And does it well and without prejudice (or commercial motivation). Imagine a network that, as you create your resource, can tell you exactly what that resource is worth, right now, if you were to offer it for sale on the open market.

That's what I'm working on. In a nutshell.

Wednesday, March 28, 2007

Transformed Teaching and Learning

Responding to Brian Lamb, who asked for "your own perspective, however brief."

Very briefly, because I don't have a lot of time...

* What does transformed teaching and learning look like?

It is directed by the learner, rather than the teacher or administrator. It ceases to be the focus of activity, and becomes a support for whatever is the focus of activity. It creates empowerment, rather than dependence.

* What are the key components needed to effect this transformation?

Attitudes, mostly.

Things that allow people to direct their own learning and create their own resources. Things that allow these resources to be located wherever they are needed (ie., ubiquitous internet + resource syndication). Placing control (and hence power) in the learner's hands - eg., personal identity, not institutional identity; personal resources, not institutional resources; etc.

* How do we build these key components and connect them?

We don't.

If we absolutely must build something, we build tools that allow people to create and build and store and syndicate. Then we give these tools to the people, making them very portable, rather than trying to establish a (proprietary, branded) web presence.

When we are building other things (such as games or EPSS, etc) we create opportunities for student-directed learning to be placed within the activity environment.

We allow simple grass-roots standards (and tools and computer languages) rather than trying to engineer a perfect solution to foist on the masses.

We continue to lobby for free and open software and resources rather than trying to create something *called* 'open' which nonetheless requires payment (either directly, via fees or subscriptions, or indirectly, via membership fees or tuitions).

Monday, March 26, 2007


I have always had a fondness for birds.
This is one of the most beautiful birds I've seen:

From Grey Matters.

Should Elsevier Journals Be Boycotted?

Responding to John S. Wilkins, who writes:
Grrlscientist just pointed out that MDs are threatening to boycott The Lancet, because Reed Elsevier, the publisher, supports weapons fairs, including manufacturers of cluster bombs.

This is a worry. Elsevier publishes around 40 journals that have a philosophy component. Perhaps philosophers, who are after all supposed to be consistent on principles, should also boycott those journals. I list some of the major ones under the fold.

'Consistency' is medical practitioners refusing to support, either directly or indirectly, products or services that unnecessarily inflict injury or death.

Philosophers - as is evident from the discussion above - are not bound by the same constraint, the discipline having no inherent stance with respect to unnecessary injury or death. Certainly, some of the people commenting above would have to be put down as 'pro', given their defense of weapons that cause unnecessary injury or death.

The Lancet's sponsoring of weapons fairs betrays a larger concern, however, and that is the promulgation of a publishing culture that has as its primary (indeed, only) value the making of money. This is evident in Elsevier's pricing policies and its evident unwillingness to allow anyone but the moneyed elite access to its wares. The people who cannot afford the journals (not coincidentally the same people who are victims of cluster bombs) be damned!

Presumably philosophy does have an inherent interest in something other than the making of money, though you would never know it these days. Certainly, anyone with a moral stance ought to be looking at how knowledge - whether military, medical or philosophical - is created, for what purpose, and who benefits.

Any time left over can be spent helping the human wreckage wrought by the philosophy that allows a publisher of philosophy to be hip-deep in the trade of weapons of mass destruction.

Saturday, March 24, 2007

The Simple Test and Complex Phenomena

Written after taking the test described by Will Thalheimer. If you want to try the test for yourself, try it here. Via Marc Oehlert, who says, "Honestly, your score on this is probably as good an indicator of your performance in this field as any certification program going." Either he thinks very poorly of certification programs, or he did not read Thalheimer's analysis.

Well, I got 2 out of 15 correct. That is substantially worse than the average, which is, as you (Thalheimer) point out, barely above what they would get from pure guesswork.

(Actually, the 32 percent is about what you would expect. It's an old adage among trivia game players: 'when in doubt, pick 3' (ie., C, the middle response). And this quiz fits the pattern: A was the correct response 2 times, B 4 times, C 6 times, D 2 times, E none, and F once.)

All this is a round-about way of saying: have you considered the possibility that it's the quiz that's the problem, not the quiz-takers?

I mean - I went into the test with the expectation that I might not do well. I have a healthy doubt of my own abilities. But I am not a 2 out of 15 in my own field. That's an unreasonable result.

There is, in my view, a systematic flaw in this test. And it can be expressed generally as the following:

The test author believes (based on some research, which is never cited) that "Learning is better if F" where 'F' is some principle, such as "Performance objectives that most clearly specify the desired learner behavior will produce the best instructional design."

This principle is treated as linear. That is to say, the more the principle is exemplified in the answer (per the author's interpretation) the more learning will be better.

But these principles are not linear. There is a point of diminishing returns. There is a point at which slavish adherence to the principle produces more problems than good. Experienced designers understand this, and hence build some slack into the application of the principles.

Question 1 provides a good object lesson:

The feedback states: "Performance objectives that most clearly specify the desired learner behavior will produce the best instructional design."

Option B (which I selected) is: “As each web page is developed, and after the full website is developed, each web page should be tested in both Netscape Navigator and Internet Explorer.”

Option C (which is considered correct) is: Same as B, with the addition of the following: “One month after completing the training, learners should test each web page during its development at least twice 90 percent of the time, and test each web page once after the whole website is complete at least 98 percent of the time.”

Now the question is, is the performance objective "more clearly stated" in C than in B? According to the author (obviously) it is. But sometimes making things more precisely stated does not make them more clear. It does not even make them more precise.

Which is clearer:

a. Test the page after design

b. Test the page 98 percent of the time after design

In my view, (a) is clearer.

Moreover, (b) is no more precise than (a). Because what (a) means is "Test the page 100 percent of the time after design".

Therefore, it would be unreasonable to select C on the grounds that it is clearer. The unthinking effort to make it more precise went over the top and resulted in a statement that is more an example of nonsense than of clarity.

The entire test is constructed this way. I got a couple where it was pretty obvious what the examiner was looking for. But otherwise, I picked what I felt was the best answer, which in every case was the less extreme version of the over-the-top choice.

In question number 2, for example, the principle is: "When the learning and performance contexts are similar, more information will be retrieved from memory."

Well, this is generally true. But will somebody prepare better spending a week on the road, living in a hotel, unable to keep up with work at home in Boston or to be there to help the kids? Being on the road creates an impact. So even if the test is being conducted in San Francisco, there comes a point where the advantage of studying and testing in a similar environment is overwhelmed by the disadvantage of being on the road.

The test author created an extreme case - a test location in San Francisco instead of a test location in downtown Boston. Thus, complications that an experienced person would automatically take into account - the time lost in airports, the rigors of travel, etc. - are built into their thinking.

The only way to get through such questions is to be able to figure out what the author is looking for. In this case, I looked at the example and it was pretty clear that it would be based on 'similarity of environment' and not any real question about 'effective learning'. It was one of the two I got right.

But the author's intention is very deliberately disguised throughout the test. Or more accurately, the test addresses such a specific context that only people who work in that specific context have any real chance of divining the author's intent (and as it turns out, the context was so narrow it didn't even show up statistically).

This, I think, is one of the problems of testing generally, and not just this test in particular.

In a test like this, each question is designed to measure only one point of learning (more precisely: to measure responses only along one vector). Theoretically, you could have questions that measure more than one vector, but it results in confusing questions and too many possible responses.

If the test measures simple things, that's fine. The question of whether 2+2=4 is not going to be impacted by external considerations.

But if the test measures complex phenomena, then it is going to systematically misrepresent the student's understanding of the phenomena.

Specifically, a very simple one-dimensional understanding will fare as well (and in this case, better) than a complex, nuanced understanding. People who understand a discipline as a set of one-dimensional principles will do the best - understanding simply becomes a case of picking which principle applies, then selecting the example that fits the best.

This test fails because it is too narrowly defined to let the simple understanders spot the principle being defined, and too dependent on single principles to give people who genuinely understand the phenomena any advantage.

The test author is right: don’t trust gurus.

Unfortunately, the test author didn't consider the possibility of recursion.

Thursday, March 22, 2007

Semantic Web - Some Responses

Numerous good responses to my post from a couple of days ago - and in this post I offer some responses, framed around the argument in this post from David Norheim:

OK, let's deal with these in order...

On the technical side

* First of all W3C RDF does not require that everyone adopts the same vocabulary for a domain.

Quite right. That is the huge advantage RDF has over plain XML. However, in order for RDF to be useful, different entities must still adopt *some* common vocabulary.

For example, if you are dealing in furniture, you need to define a furniture vocabulary, so you can use tags like furn:name and furn:size. For each of these tags, too, there may be a canonical vocabulary. For example, the value of furn:name must be one of {couch, table, chair, etc.}.

Now if every furniture business uses furn:name as planned, there's no problem. But what happens is that each enterprise defines the furniture domain differently. Some people want to include 'sofa' as a value of furn:name, others want 'settee'. Each choice here confers business advantage on one or the other. So no vocabulary is ever defined, or worse, you get conflicting vocabularies using the same name in different ways.

Every day in my inbox I see more examples of this. ODRL vs XrML. The various IM specifications. DC vs LOM vs IEEE-LOM. RSS vs RDF vs Atom. And more.

* RDF makes it trivial to publish data in which you mix vocabularies, making statements about a person, for example, using terms drawn from FOAF, Dublin Core and others

Quite right. But what has tended to happen is that people prefer to use only one vocabulary. They don't like mixing and matching.

And in any case, this doesn't solve the problem. People don't put two versions of the 'title' element in a single document - they pick one. You'd never see an RSS-type document carrying both an atom:title and a dc:title for the same item. So we need to know (via crosswalks) that what Atom means by title is the same as what DC means by title.

Except, of course, it doesn't. Even dc:title means different things to different people. I have in my inbox right now an email in which the proposed vocabulary for DC elements is being rejected because it is language-based, not concept-based. Now you can muddle your way through such arguments. But they are endless, and nobody ever compromises.
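The crosswalk mechanism itself is trivial to sketch (the vocabulary and field names below are illustrative, not any official mapping). The code is not the hard part; the hard part is the argument over the table's contents, because the mapping itself encodes one interpretation.

```python
# Sketch of a metadata crosswalk (mappings illustrative): fold each
# vocabulary's term for 'title' into one common schema. Writing this
# table is easy; agreeing on it is the endless part.
CROSSWALK = {
    ("dc", "title"): "title",
    ("atom", "title"): "title",
    ("lom", "general.title"): "title",
}

def normalize(record):
    """Fold differently-named fields into one schema via the crosswalk."""
    out = {}
    for (vocab, field), value in record.items():
        common = CROSSWALK.get((vocab, field))
        if common:
            out[common] = value
    return out

a = normalize({("dc", "title"): "My Paper"})
b = normalize({("atom", "title"): "My Paper"})
```

Two records from different vocabularies normalize to the same thing - but only because somebody decreed, in the table, that the two titles mean the same thing.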

* RDF is showing increasing adoption, showing up in products by Oracle, Adobe and Microsoft, for example.

The links in this point all point back to my post.

Anyhow, I haven't seen any evidence of increasing adoption.

Putting on my best movie voice: But you see Al, it isn't the production of RDF that's the issue. Anybody can produce RDF. It's the *reading* of RDF that's the issue. And nobody reads each other's RDF.

* RSS, ATOM and iCal are examples for data standards jointly supported by different companies - there’s just no reason to assume that this list cannot grow.

Neither RSS nor Atom is RDF (except for RSS 1.0, which has a usage of about 3 percent). I also posted figures on my website just this week showing that iCal usage is something like 7 percent. iCal isn't RDF either - hence the need for a converter and the resulting proliferation of RDF versions of iCal, none of them official. Meanwhile, neither Google nor Outlook is based on iCal.

Bottom line:

Technically, we *could* all agree on standards and vocabularies. But, empirically (that is, looking at actual technical implementations), we *don't*. And that is really what matters, isn't it?

* People are looking for incentives to share. Why do you always have to look at the big corporations? Governments (at least in Europe) have self interest in publishing (semantically clear) information to make its own government more efficient and its customers (corporations and people) more competitive. Expect more from them. Small companies have incentive to bring down the bigger ones.

In general, you have an incentive to share when (a) you're the smaller fish, and (b) your intention is to provide a public service.

Incentive (a) doesn't help us a whole lot, because, mostly, big fish beat small fish. Yes, there are exceptions - Google's rise from nothing being the most notable. But you can count them on one hand. Mostly, when small fish begin to get big, they are swallowed by big fish. Like Flickr. Then any commitment they had toward sharing becomes a commitment to Yahoo's version of the standards.

Incentive (b) works in a climate where there is a robust public service, either provided by government, or provided voluntarily by the general public (eg. open source).

The robustness of the public service is being challenged these days. Not only are companies pressing to force government to withdraw from providing services (eg., the anti-BBC lawsuits, and the anti-community networking lawsuits) there are pressures within government to tailor services to the needs of companies. So, for example, one company would have privileged access to government information, and it doesn't matter what format it's in then. Yes, there is a spirited campaign to oppose this - but the successes of that campaign - eg., vs. CSPAN's recent declaration of copyright - are rare enough that they're news.

Meanwhile, the general public is volunteering itself away from the semantic web and toward things like Web 2.0, AJAX, JSON and a host of patchwork solutions. The reason for this is that the semantic web (mostly at the request of business, ironically) has become so bloated it's too cumbersome to use. And also, the businesses (and academics and governments) that are developing it aren't using it.

* Businesses do cooperate, when they see it as being in their own interests. In fact commerce can only function when businesses work together at their interfaces. Money is a shared vocabulary with a set of standard protocols. Kendal Clark elaborates on this in a separate post.

Yes, business cooperates. But:
- these instances are contingent on continued mutual benefit. Companies can and do pull the plug on each other.
- a lot of this cooperation is in order to stiff some third company, which will be locked out. If you subscribe to the e-learning trade press, you'll see an endless stream of 'strategic alliances'. That means X is aligning with Y in order to prevent Z from doing business with Y.
- they aren't honest about it. They may appear to be cooperating, but then at the last minute they'll pull out their submarine patents and torpedo the works, trying to lay exclusive claim to a domain built by a number of partners. OASIS exists just to make this possible (because the W3C wouldn't allow it - which is why the businesses don't really support the W3C's efforts).

* Another argument comes from Aditya Pandit, who argues that innovation and adoption come not from the large corporations (the big players behave so as to retain the advantage that they have) but from start-ups (ref. MySpace, YouTube, Yahoo, Google). So looking at adoption by the big players is really incorrect. This should be common knowledge from innovators and startups.

Right. The big players don't innovate.

But they steal.

Any technology company that starts from scratch in today's environment is copied almost as soon as it becomes successful (or purchased outright, but that's a separate story).

In order to be successful, you have to be successful so quickly (or so underground) that the bigs have no choice but to go along. And even then, they'll do it only reluctantly, and they'll try to subvert it.

It's way too late for that to happen with RDF. No start-up is going to come along and make (certain flavours of) RDF the standard. There's simply too much instant competition from the bigs.

And besides - if you out-innovate them, and out-grow them, they'll just slap a bogus patent claim on you.

* I think what Downes says is colored by a very skewed “free market”-American (I know he is Canadian)

What I am skewed by is the rampant avarice and dishonesty shown by the business sector (it also exists in the public sector, I'm not letting them off the hook, but there's less in the public sector, and the public sector is smaller).

If you read my writings, you would see that, with some few exceptions, I do support free markets (and the exceptions are the well-defined cases of market failures, generally caused by shortages or excesses of production).

But it's important to understand that "pro-business" and "pro-free markets" are not synonyms. The first instinct of any business is to attempt to subvert the free market in order to obtain an edge or, ideally, a monopoly.

That is why they play games with the standards - they are trying to subvert the process to their own advantage. Often, this involves subverting standards committees (like, say, ISO) to make their own proprietary technology the standard (like, say, MPEG-REL).

* ... view that the market is best off with competing standards, ref CDMA/GSM, the banking system etc. And let the companies compete freely.

Both the telephone industry and the banking industry had to be regulated into submission before they would share.

Even now, mobile phones that work in Canada won't work in Europe. And just last week the legislation forcing phone-number mobility came into force. Meanwhile, we are looking at problems like net neutrality - as I speak, the phone companies are squeezing Skype bandwidth.

As for the banking industry, it wasn't until a few years ago that the banks would allow Credit Unions into the ATM networks. Moreover, there are competing interbank networks - Cirrus and Plus, for example, which is why my bank card doesn't always work in bank machines. Meanwhile, banks had a monopoly on cash dispensers until legislated into opening the standard and allowing 'white machines' to be installed.

Governments know well what would happen were the telephone companies and the banks allowed to compete freely. That is why they are two of the most heavily regulated sectors there are.

* Adoption of standards do take time…

Yes, it does take time. But after a certain amount of time, you need to realize, they're not going to do this voluntarily. At a certain point, if you're not going to legislate them into cooperation (and I am *not* advocating that in this instance, for numerous reasons), then you have to pull the plug.

Tuesday, March 20, 2007

Why the Semantic Web Will Fail

Don't get too excited by the title. But I do want to share a few thoughts...

It was running through my head just now, the work that we were doing here in Moncton to build an e-learning cluster. Because I saw that 'cluster building' is still one of the major pillars of NRC's strategy, and I was wondering whether our work would ever be a part of that again.

And I was thinking about some of the things that didn't go so well in our first few years. Some companies went under - a couple, before we even talked to them, another, after we were in a project with them. And then there was the company that we sat down with, oh in 2002 or 2003, and laid it all out - RSS, content syndication, social networks. The whole Web 2.0 thing.

And they weren't interested. And in less than a year, they were gone.

And I thought about where we're right today and where we might be wrong, and why. Because we do have a pretty good track record (check for yourself, it's all on the public record - this year's predictions (bucking everyone else) include OpenID and the runaway success of Wii).

And I'm saying the semantic web won't work. Can't work.

But how do you explain that intuition?

And I was thinking about the edgy things of Web 2.0, and where they're working, and more importantly, where they're beginning to show some cracks.

A few key things today:

- Yahoo is forcing people to give up their Flickr identities and to join the mother ship

- MySpace is blocking all the widgets that aren't supported by some sort of business deal with MySpace

- the rumour that Google is turning off the search API

And that's when I realized:

The Semantic Web will never work because it depends on businesses working together, on them cooperating.

We are talking about the most conservative bunch of people in the world, people who believe in greed and cut-throat business ethics. People who would steal one another's property if it weren't nailed down. People like, well, Conrad Black and Rupert Murdoch.

And they're all going to play nice and create one seamless Semantic Web that will work between companies - competing entities choreographing their responses so they can work together to grant you a seamless experience?

Not a chance.

Now - there are many technical reasons why I think the Semantic Web is a loser, along with some cultural and philosophical reasons. Namely: the people who designed the Semantic Web never read their epistemology texts.

But the big problem is they believed everyone would work together:
- would agree on web standards (hah!)
- would adopt a common vocabulary (you don't say)
- would reliably expose their APIs so anyone could use them (as if)

Shall I go on?


Maybe we won't be building clusters in Moncton, maybe we will. I don't know - I'd like to keep trying. Maybe people will listen to us or maybe (more likely) they won't.

The future is not in the Semantic Web (or in Java, or in enterprise computing - all for the same reason). Careers based on that premise will founder. Because the people saying all the semantic-webbish things - speak the same language, standardize your work, orchestrate the services - are the people who will shut down the pipes, change the standards, and look out for their own interests (at the expense of yours).

I don't trust any of them. Not even as far as I could throw them. Because I know they'd sell me down the river in a minute, if it meant one iota of business advantage. You know this too.

Yeah - we'll play games on Yahoo, create a not-too-serious blog with Google, post some tunes on MySpace (under an alias of course), and mess around with some photos on Flickr.

And we'll even go along with some unimportant things, like the university account and email, so we can access the course notes on Blackboard. The personal email address, that we got from our ISP, we will tell only to our closest friends - and we'll use the gmail account for logons and the Yahoo identity for spam.

We'll post to these Web 2.0 sites, but if the content means anything, we'll keep a copy on our own computers as well (until Windows crashes and eats all our data, that is).

But trust them? Not a chance.

The future of the web will be based on personal computing.

Not because everybody in the world is some sort of Ayn-Rand-clone backstabbing money-grubbing leech.

But because there's just enough of them - and they're the ones who tend to rise in business. And when they say "give me your data" (or "let me manage your money" or "base your career on my advice") it's merely a prelude to their attempting to take you to the cleaners.

If my online world depends on them - and in the Semantic Web, it would - then my online world will fail. Will be a house of cards that will eventually collapse.

Yeah - I know. It's not a technical argument. And it probably reveals some of my own biases. But I can't shake the intuition that I'm right here.

(Update - Mar 21 - fixed a couple of typos and added the link)

Friday, March 16, 2007

Let Elizabeth Speak

I am not a member of the Green Party. I was once, but not since 1980. Right now I am not a member of any party, but if I were, it would be the NDP, the party I have belonged to, off and on, since 1980.

That long introduction notwithstanding, I am adding my voice to those throughout Canada who are calling for democratic election debates. The Green Party is a national party, they have nominated candidates across the country, they collect a decent vote - they are, in short, everything you would want from a national party. And more.

In the next election, let Elizabeth May (or any other leader of the Green Party) speak. There is no justification for continuing to keep Canada's fifth party silent. And if you agree with me (and are Canadian), sign the petition. I did.


This is a short clip from a comment that I didn't finish, partially because I don't have the time, and partially because I would like to engage this topic from a different direction.

There's a lot going on here and I probably can't cover all of my disagreements in one post.

At the core, I think, is that you continue to assign agency to groups. You represent groups as doing things that (in my opinion) groups are not capable of doing. You are (again in my view) conflating sentences that can be used descriptively with sentences that assign agency.

For example: suppose we see a flock of geese come in for a landing on the lake. We would typically say, "the flock of geese landed on the lake." This is an accurate statement, because it describes what happened. But when we look at the same example, we might also be tempted to say, "the flock of geese decided to land on the lake." Now we have committed an error.

A flock of geese isn't the sort of thing that can 'decide'. The capacity to decide depends on having a mind, and a flock of geese does not have a mind. A flock of geese consists only of geese, and while it may be true that individual geese have minds, it does not follow that the flock has a mind. What in fact happened is that each individual goose decided to land. We observed this and interpreted it as the flock deciding to land.

In the same way, you say: "I also feel that there is 'group knowledge' that is outside of the individual." And: "there are at least two levels of "knowledge" in any group, the one that the group as a whole has constructed and the one the individual has constructed." Here again, you are moving from a description to an assignment of agency. When we say, "there is group knowledge", that is like saying, "the geese are landing". But when we say "the group constructs knowledge" that is like saying "the geese decided".

Wednesday, March 14, 2007

Networks, Communities, Systems

I think this is a very interesting post.

Yonkers writes, "I think most educators focus on teaching students networking so that students can then move into communities of practice that will turn into systems."

Except... they don't.

"Most educators are stuck in their own systems." Quite right. And that's where they try to put the students. Without all this networking nonsense at the front end.

I look at the sequence described - networks -> community -> system - and what I see is something that works breaking down into something that doesn't. I see an effective decision-making mechanism being subverted and employed in the service of a minority, usually to the detriment of the whole.

Yonkers writes, "At some point, however, a community is developed. This community connects on a social as well as cognitive level." I would write "emotional" rather than "social", but it's close enough.

"The community also begins to establish which knowledge is important to function within that community and there begins to be more group processing of the “community” knowledge in order to access the group knowledge that are within community members’ networks."

No. This is a fallacy.

'The community' is not an agent. It does not have an independent existence (not even if we create fictions of such existence, such as the declaration that a 'corporation is a person').

Only individuals in a community have agency. Which means that we need to look very closely at what happens when someone says "the community begins to establish which knowledge is important." What this means is that some few members of the community undertake this action, and are then in some way able to impose it as a directive on the community as a whole.

We need to distinguish between two senses of 'becomes important' here:

1. The sense in which the phrase is descriptive, an emergent phenomenon, that we are able to identify after the fact, and

2. The sense in which the phrase is normative, an individual action, which becomes definitive of membership or good conduct in the community.

The first is very easily established via a network. But the second requires a somewhat more cohesive and restrictive organization, which requires an injunction on individual freedom of action.

When somebody says a network "isn't sufficient" I always look to see what it is that the network is deemed to be insufficient for. And on analysis, it is always some stipulation - some custom, value, belief or law - that one person wants to impose on another.

To my mind, the only impositions that can be justified are those that are necessary to counteract other attempts to impose one person's will over another, those, in other words, that preserve autonomy, diversity, openness and interaction.

Small Groups

Responding to Beth Kanter, who asked me for a comment, so...

Perhaps it works with your audience, but if it were me, my first reaction is: I hate small groups, I hate small groups, I hate small groups.

Although people say that small groups 'give everybody a chance to talk' what they *actually* do is serve to eliminate minority and dissenting opinion.

For example:

Suppose there are two options, (a) and (b). Suppose that four out of five people prefer (a), but that on hearing the case for (b), one (a) supporter will be convinced to switch for every (b) advocate present (this is a *very* common situation).

You have 15 people. That means that at the start, 12 of them prefer option (a) and 3 prefer (b). After the discussion, 3 switch allegiance, so you have 9 people preferring option (a) and 6 preferring option (b). Almost an even split; certainly option (b) is a respectable alternative.

But imagine that instead we split into three groups of 5. Now in each group, four people prefer (a) and one prefers (b). Although one person is convinced, there are still 3 people who prefer (a). So the group moderator reports (a). The results come back from the groups: everybody prefers (a). The preference for (b) has been squelched out of existence.
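The arithmetic can be checked with a quick sketch (assuming, as above, that each (b) advocate wins over exactly one (a) supporter during discussion):

```python
def discuss(a, b):
    # Each (b) advocate converts exactly one (a) supporter.
    converts = min(a, b)
    return a - converts, b + converts

# Plenary: 15 people, 12 prefer (a), 3 prefer (b).
print(discuss(12, 3))  # (9, 6) - (b) emerges as a respectable minority

# Three groups of 5: each starts with 4 for (a), 1 for (b).
reports = []
for _ in range(3):
    a, b = discuss(4, 1)   # -> (3, 2): the majority in each group is still (a)
    reports.append("a" if a > b else "b")
print(reports)  # ['a', 'a', 'a'] - every group reports (a)
```

Same people, same persuasion dynamics; only the partitioning differs, and in the small-group version the support for (b) never surfaces at all.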

But that's not all...

The division of people into small groups is almost never random. Often, group leaders are assigned by the organizer. Even when groups form on their own, the group leader tends to be the person deemed most favorable to the organizer.

Now you have a situation where, even if more than half of the people have switched their allegiance to (b), the organizer, who is loyal to the original option of (a), will report (a). This completely subverts the will of those who preferred (b), and worse, leaves the (b) supporters with no option, no access to the plenary floor (without 'causing a disruption').

I have seen small groups abused so regularly and so often that I have come to conclude that when small groups are employed it is almost *always* about maintaining the power of the organizers rather than giving people a voice.

To me, 'giving people a voice' does not merely mean 'allowing them to speak' but also 'enabling them to be heard'. When somebody is shuffled off to the obscurity of a small group, that voice has been stifled, not empowered.

The use of small groups, rather than empowering people, instead elevates a few people - the 'representatives' - into super-voices, and by design silences all other voices (again, any dissent from the official report is 'disruptive').

There is yet another way in which small groups stifle dissent: and that is by the creation of an expectation of resolution.

Just this week I was at a meeting where a small group process, to take place in a school context, was being discussed. Like everything else in schools, the 'discussion' was being carefully regimented. Three hours were allotted, with the requirement that the groups "come to consensus" in that time.

In my experience, the only way to get people to arrive at a "consensus" on anything in three hours is to run roughshod over their right to voice their dissent. Perhaps a vote may be taken after three hours of discussion. But on nothing but the most trivial of issues should any group (of any sort of diversity) be expected to reach consensus.

What is happening, of course, is that a consensus will be 'declared' rather than reached. The time pressure and the peer pressure in the small groups (where supporters of a minority view will have been isolated from any others sharing that view) will force dissenters to 'go along'. In these exercises, too, there is nothing major at stake - why be a holdout, when the process appears to be so much more important than the result?

Finally, although it doesn't really come up here, I will point out that small groups are often used to ensure that a superiority of numbers conveys a strategic advantage. You see this at policy conferences, where concurrent sessions are held to discuss different issues. I often find myself wanting to comment on more than one subject, but find that because of the structure I can only address one thing.

I have nothing against games like this, other than a passing observation that they may feel a bit contrived. But I really dislike the small group process. Because the most disempowering thing you can do, in any setting, is to impose a structure that ensures that voices won't be heard.

Just my view.

The Mind = Computer Myth

Responding to Norm Friesen:

If you were to read all of my work (not that I would wish that on anyone) you would find a sustained attack on two major concepts:

1. The 'information-theoretic' or 'communications theoretic' theory of learning, and

2. The cognitivist 'information processing' physical symbol system model of the mind

These are precisely the two 'myths' that you are attacking, so I am sympathetic.

That said, I think you have the cause-and-effect a bit backwards. You are depicting these as technological theories. And they are, indeed, models of technology.

However, as models, these both precede the technology.

Both of these concepts are aspects of the same general philosophy of mind and epistemology. The idea that the human mind receives content from the external world and processes this content in a linguistic, rule-based way is at least as old as Descartes, though I would say that it has more of a recent history in the logical positivist theory of mind. Certainly, people like Russell, Carnap and even Quine would be very comfortable with the assumptions inherent in this approach.

Arguably - and I would argue - the design of computers followed from this theory. Computers began as binary processors - ways of manipulating ones and zeros. Little wonder that macro structures of these - logical statements - emulated the dominant theory of reasoning at the time. Computers were thought to emulate the black box of the human mind, because what else would that black box contain?

Now that said, it seems to me that there can't really be any denying that there is at least some transmission and reception happening. We know that human sensations result from external stimuli - sight from photons, hearing from waves of compression, and so on. We know that, once the sensation occurs, there is a propagation of signals from one neural layer to the next. Some of these propagations have been mapped out in detail.

It is reasonable to say that these signals contain information. Not information in the propositional sense. But information in the sense that the sensations are 'something' rather than 'something else'. Blue, say, rather than red. High pitched, say, rather than low pitched. And it has been a philosophical theory long before the advent of photography (it dates to people like Locke and Hume, minimally) that the impressions these perceptions create in the mind are reflections of the sensations that caused them - pictures, if you will, of the perception.

To say that 'the mind is like a photograph' is again an anticipation of the technology, rather than a reaction to it. We have the idea of creating photographs because it seems to us that we have similar sorts of entities in our mind. A picture of the experience we had.

In a similar manner, we will see future technologies increasingly modeled on newer theories of mind. The 'neural nets' of connectionist systems are exactly that. The presumption on the part of people like Minsky and Papert is that a computer network will in some sense be able to emulate some human cognition - and in particular things like pattern recognition. Even Quine was headed in that direction, realizing that, minimally, we embody a 'web of belief'.

For my own part, I was writing about networks and similarity and pattern recognition long before the internet was anything more than a gleam in my eye. The theory of technology that I have follows from my epistemology and philosophy of mind. This is why I got into trouble in my PhD years - because I was rejecting the cognitivism of Fodor, Dretske and Pylyshyn, and concordantly, rejecting the physical symbol system hypothesis advanced by people like Newell and Simon.

I am happy, therefore, to regard 'communication' as something other than 'transmission of information' - because, although a transmission of information does occur (we hear noises, we see marks on paper) the information transmitted does not map semantically into the propositions encoded in those transmissions. The information we receive when somebody talks to us is not the same as that contained in the sentence they said (otherwise, we could never misinterpret what it was that they said).

That's why I also reject interpretations, such as the idea of 'thought as dialogue' or communication as 'speech acts' or even something essentially understood as 'social interaction'. When we communicate, I would venture to say, we are reacting, we are behaving, we may even think we are 'meaning' something - but this does not correspond to any (externally defined) propositional understanding of the action.

Tuesday, March 13, 2007

Baudrillard's Terror

I sometimes wonder whether it's possible for a European to understand America.

You need to see America while driving a Ford Galaxie on an interstate at night, the neon glow of the city lights reflecting the low-hanging clouds on a sultry steamy summer evening. If you do not see this, if you cannot imagine it, you do not understand America.

And I’m driving into Houston on a rain slicked Texas road
Land so flat and sky so dark I say a prayer to float
Should all at once the Sanasito surge beyond its banks
Like Noah reaching higher ground I’d offer up my thanks

Cause I’m a stranger here No one you would know
I’m just passing through I am therefore I go
The moon rose in the east But now it's right above
As I say aloud Goodnight America

(Mary Chapin Carpenter)

Everybody understands that America is an illusion, a wonderland of pink flamingos and purple velvet.

But it's their illusion. It belongs to the people; it was created by the people. They do remember driving their cars on the interstate, listening to the DJ, their futures riding on four wheels and a dream. They made this; everything in America was made.

If the post-modernists were misinterpreted, it was because their interpreters believed that there could be an interpretation, some stance, other than a point of view, that represented their thought. If it were up to me, I would drop their books, shatter them into a thousand pieces, each a one-paragraph blog post, with no determinate start page, each reader entering from a different vector, each word meaning something different with each glint of the reflecting light.

If post-modernism had no impact, it's because everyone stood around, waiting for it to say something to them, when all along the message was, that this is yours, to pick up, and create what you will. It is perhaps no accident that Baudrillard was from common parents; when you do not inherit your history you have to make it, and when you make it, it becomes yours.

Monday, March 12, 2007

Prosperity in New Brunswick - Part 1

As I listen to the radio the discussion is revolving around the recently terminated Halifax bid for the 2014 Commonwealth Games. As everybody knows by now, the proposal - roughly $1.7 billion - was substantially more than the city was willing to commit. Only $700 million had been pledged from other levels of government.

Now I have no opinion on the Games either way. I could take them or leave them. And it sounds reasonable to pull the plug, but as one caller commented the other day, $700 million just went out the door and down the road. Yes, Halifax would have gone into debt. But it does seem that the decision was based on narrow and short-sighted considerations.

I am reading the New Brunswick Reality Report, a review of what our own province will have to do in order to achieve economic self-sufficiency. It talks about things like the need to achieve economic growth and to improve our labour pool. But the bottom line is this: "To retain or attract people, New Brunswick will have to offer a standard of living that is equal or superior to that of other jurisdictions." (part 1, p.8)

The report lists four elements to this: first, competitive wages and salaries; second, access to affordable housing; third, access to good education and health services; and fourth, a 'modern urban environment' including the amenities and opportunities cities bring.

New Brunswick is doing poorly on all four counts. It is worth pausing for a few moments to reflect on this.

That wages are lower in New Brunswick goes without saying. When I arrived here in 2001 I was shocked to find how little university professors were paid. No wonder they want to pack up their textbooks and leave! It is not only accepted as a matter of fact that wages are lower here, there are elements in society who want to keep it that way.

The local newspaper, for example, complains that Moncton's RCMP are paid more than regional police in other Atlantic Canada cities. But this is a good thing! We get much higher quality policing, and as a result, Moncton is wholly free of the random violence that has plagued Halifax (and its regional police). Yes, it costs more - but when we needed a helicopter to patrol for the recent Rolling Stones concert, we got one. That's what paying more buys you.

There is this 'traditional mindset' that holds us back on things like this. It's a mindset that says you should pay as little as possible for 'luxuries' like university professors and Cadillac police. It also contributes to problems in other areas.

Take housing, for example. We hear a lot about how affordable housing is in the Maritimes. And it is - if you are buying a house. By contrast, however, if you are renting, housing is no more affordable than elsewhere. And your rental accommodations are most likely going to be substandard. This is the result of 'traditionalist' social and economic policies that favour home ownership. Thus, in New Brunswick, taxes on rental accommodations are twice those for single-family homes. That's why it's so hard to find a decent place to rent.

The same sort of thinking applies to education and health. We have voices in our community - such as the local newspaper - constantly arguing for lower funding. The Times & Transcript, for example, just this past weekend questioned whether we as a province could afford our universities and community colleges. This is exactly the sort of thinking that killed the Commonwealth Games bid.

It was an odd thing to be a recent arrival in this province and to be reading about schools that do not have gyms, to be reading about infrastructure that is literally falling apart, to be reading about unacceptable illiteracy levels. The anguish with which the province very reluctantly released funding for so basic a facility as a cath lab at the hospital was astonishing. Even conservative Alberta spends top dollar on some of the finest medical facilities in the world. So what was happening here?

We keep hearing over and over, 'we can't afford it', that 'we are too small a province.' But it's simply not true. I lived more than four years in Manitoba, a province that has almost the same population as New Brunswick. Somehow, Manitoba could afford a lot of these things - major universities, health and education facilities, even an NHL hockey team. And we hear the plaint, "We can't afford it" whenever a CFL football team is brought up.

The infrastructure and services that are expected in other communities seem to be stymied by this sense that "we can't afford it" here. Little Brandon, Manitoba, population 40,000, can afford a quality bus service with proper routes, but Moncton struggles with its 50-minute service (thankfully, this is beginning to change). We cannot keep sidewalks plowed here, much less maintain walking and hiking trails. But these are the norm in other cities. Our city engineers' attempts to create bicycle lanes on residential streets are attacked by the newspaper. They should visit Amsterdam, where bicycles have their own roads!

The people who try to save a dime at every turn are squeezing New Brunswick (and Nova Scotia, even more so) dry. While they save every penny, they do not realize that they are creating a community and society in which living is hard and in which there's nothing to live for. Perhaps in the old days it was sufficient that the people would acquiesce to serve God, family and the Company. But people today are looking for more than that.

To be fair, things are changing. Things are getting better here in Moncton (though evidently not in places like Halifax and Saint John). The McKenna government did things like build highways and wire schools. This gave us a transportation and communications infrastructure. The City of Moncton parlayed that into a new airport. We now need the infrastructure that supports a transportation hub - not just hotel rooms (though these are coming) but things like a convention centre and entertainment facilities. The sorts of things Halifax was poised to land - before they tossed the funders out on their ears.

I lived in Calgary when it was bidding for the 1988 Winter Olympics. Calgary was a much smaller city at the time, and there was a lot of concern that the city could not afford it. But the attitude was, "we will find a way to afford it." This resulted in a brand new arena (that now houses an NHL hockey team), enhancements to the stadium (where the CFL and university football teams play), light rail transit, the Olympic Oval (which is now producing all those speed skating gold medals), a ski-jumping facility, an entirely new ski resort (in Kananaskis) and more.

But the city didn't stop there. In front of its gleaming new city hall, also completed in time for the Olympics, it built a plaza for public gatherings, and beside that, the Calgary Centre for the Performing Arts right next to the enhanced Glenbow Museum. Across the street from these was placed the new Convention Centre - I have been there a number of times since, including for the city's successful 'Smart City' bid and follow-up conference. Success builds on success. That's why you find the money.

You know - over the last few months I have been talking about my theory of learning as pattern recognition and of education as practice and reflection, and somebody commented that while successful students understand this, unsuccessful students do not. They look at the successful students and comment that they were "just lucky" to get the grade they did. This attitude seems to permeate the Maritimes, the idea that places like Ontario and Alberta are "just lucky." And they do not see how the practice of being innovative is what makes you so.

One of the things that to me really spelled the bankruptcy of the previous government was that it did not even try to lure Research in Motion, which was seeking to open an Atlantic office, to New Brunswick. "We don't go after just any business that's out there," remarked Bernard Lord at the time. Maybe not - but it seems that these high-paying high-tech jobs are exactly what you would try to attract. So what would stop the government from even trying?

The only explanation that makes sense is that the government thought that these jobs were beyond its capacity to attract. That the province was simply not good enough to land high tech employers. That the brass at RIM would look at New Brunswick and ask about things like schools and hospitals and parks and exercise facilities and hotels and convention centres and the government would have no answer. No, these facilities were not in place, because we could not afford them.

When you look at the current government's Prosperity Plan, it is not clear that the lesson has been completely learned. Perhaps - as they're saying on the radio today - the government is trying to cater to those old and conservative people in rural New Brunswick.

Let's look at the proposed course of action in detail:

Post-secondary education and training - it is perhaps telling that the authors included 'literacy and numeracy' under this heading. But there is no vision here. I mean, it's not like no previous government attempted to improve literacy. What needs to be examined is the process and purpose of post-secondary education in the province.

The report reads as though the reason we need PSE is to train workers - to convert existing New Brunswickers. That is exactly wrong. The reason we need PSE is to provide opportunities for the children of people who move to the province. The purpose of the institutions is not to 'train' existing New Brunswickers, it is to attract new New Brunswickers.

Part of this whole 'traditionalist' way of thinking is that we do it ourselves, that the future is for New Brunswickers (whatever they happen to be) and not people 'from away'. This attitude must be changed. The emphasis on plans that favour existing New Brunswickers - whether they be preferential tuition rebates or programs to repatriate family members - must be eliminated. The focus must be on the New Brunswickers that do not yet exist - what will bring them here, what will keep them here.

This is part and parcel of this whole "we can't afford it" thing. It reflects a thinking that is entirely focused on the small population that currently constitutes Maritime society. Yes, this small population cannot afford it. This small Halifax couldn't afford the games - but a Halifax of a million people certainly could, and that should be the objective. And you know - when I think of Moncton, I am also thinking of the city of a million people. Not as some sort of indeterminate future. But as something I expect within my lifetime. Nothing less. Because it's not that there's a shortage of people in the world - just the will to attract some small percentage of them here.

Innovative equipment and processes - this is how the report is styling 'research and development', and while it should be no surprise to find that I support investments in research, I do not agree that our research should be focused on "value-added and technologically unique goods and services." This is based on a misunderstanding of how wealth is produced. Yes, the making of things (innovative or otherwise) produces wealth. But so does a multitude of other things.

It might come as a surprise, for example, to see the City of Moncton list among its revenue-producing activities the study of literature. But this is exactly the case, as the Northrop Frye festival attracts visitors to our city every year. Starving research in literature because it doesn't produce 'products' would be therefore to starve something that generates revenue - the best kind of revenue, intellectual products and services.

The report states, "Innovation comes from the application of new technology, new ideas and new processes." Quite so - but the real wealth is in new technologies, new ideas and new processes. Not the packaging and sale of them, not from the IP they produce, but from the mere fact that they exist in the community. Because - again - it is the existence of this level of development that will attract additional development.

There is a danger in attempting to define innovation as "unique" advantage. This suggests a parochial approach to research and development. By stressing things like IP and partnerships with local corporations you actually discourage additional development.

We already see an example of this in the business community. Existing companies in New Brunswick, such as the Irvings, attract significant support from the government, such as the property tax deal in Saint John (worth about $125 million - imagine what you could buy for that). Because these local companies obtain such advantages, they are able to compete successfully against any outside company. Why would Shell or Esso or BP build gas stations in New Brunswick? It's pretty hard to compete against a company that is saving $125 million off the top.

You see this a lot in New Brunswick, from top to bottom. Companies parlaying political ties into economic advantage, which they use to keep the competition at bay - and out of the province. Only in some locations - such as Moncton - is this hold being broken. Imagine what it must have taken for Home Depot to finally convince itself it could compete with Kent on its home turf (and, note, without that competition we would never have seen Kent lower its prices or even open on Sundays).

Rather than creating a 'unique advantage' for New Brunswick companies, we should be going out of our way to level the playing field. We should be saying things like: our research is available to everyone. Companies will figure out for themselves that moving to where the research is means that they hear about developments more quickly. That the advantage to being in Moncton isn't low wages or government handouts, but the stimulating atmosphere of research and innovation.

New business development - again, this smacks of that whole 'us against the world' attitude. As though prosperity will come from existing New Brunswickers creating products that they can export to the world.

Yes, developing exports will help an economy. But so will developing a domestic market. One of the reasons the United States can export almost at will is that products can be developed and sold at home first.

This is important because it is the key to leveraging any product or service that we do sell. If we are selling something strictly for an export market, we receive no advantage as a community. We don't save on shipping costs because we don't buy the product. We don't gain from inside knowledge because we don't use the product.

The companies mentioned in this section are information technology companies who are managing to sell their wares abroad. That's great - but where is the benefit local companies could be obtaining? These companies sell e-learning, for example - but where is the major consumer of e-learning products (such as an online university)?

The key here is to look at our strengths - and then to develop local markets for those strengths. These local markets will, in turn, become businesses that have an advantage world wide.

And this is a strategy that is more effective because it is open. It doesn't favour existing New Brunswick companies - it is something that could benefit any company moving here. Indeed, it becomes an attractor - by becoming a consumer of some New Brunswick good or service, a company increases its competitiveness by moving to New Brunswick (now imagine Bernard Lord being able to go to RIM and say "We will improve your competitiveness" rather than "we will support your competition with subsidies").

Target Large Corporations - sure, nothing wrong with large corporations, provided (a) they do not obtain unreasonable subsidies from governments, and (b) they do not successfully lobby for lower taxes. These are things that corporations, and especially large corporations, do to leverage their advantage in the marketplace. And governments need to be resistant to that.

Why? Because a corporation can move on after it has bled a city or a province dry, but the city or province cannot. Look at how resource companies sometimes operate in industry towns. What long-term advantage did Nova Scotia gain from its coal mines? What long-term advantage did New Brunswick gain from its forests?

The relation between a company and its community must be understood as an exchange. The company will extract wealth from the community - it will take its coal and timber, its fresh water and its wind, its young talent and its government services. In return the community should demand a fair exchange. Infrastructure and services that can be built and maintained indefinitely into the future. The development of local talent and enterprise that will allow it to diversify. The renewal of renewable resources, and the replacement of resources that are not.

This may seem like a hard sell to large companies, but it's not.

No company other than a predatory fly-by-night will want to locate in a community that is allowing its assets to be depleted by its business community. Such companies look at such a community and conclude that not only is there no wealth to share today, but also that there will be no viable community in the future. To locate in such a community means replacing the assets that have been drained - and that's not a good business move.

If we provide Bell Canada with incentives to attract it to Moncton, we will land Bell Canada. But if we offer to be good customers, we will attract all three - Bell, Rogers and Telus. All of whom will have an incentive to invest in the community and make it a bigger market, rather than to take what they can from the community and leave.

This is something that was not understood by the previous government. It felt that the route to prosperity lay through playing favorites and offering sweetheart deals. As the report notes, the NRC was intended to help develop technology clusters. This was killed by the previous government. The NRC, which should have been benefiting the entire community, was led, through a policy of developing patents and licensing technology, to favour only a few special relationships. Rather than create a cluster, the NRC tended to pit New Brunswick companies against each other. The NRC needs to encouraged to engage with the community, to pursue a strategy of sharing innovation rather than of hoarding it.

The report talks about the importance of leadership. It also states that "New Brunswickers should demand that both the federal and provincial governments double levels of investment in economic development." This is all very well. But it must be understood that the investment must be on effective economic development.

And in particular, this means that the investment must be on infrastructure, on developing the overall competitiveness of the province. It must be, in other words, on investments that every person and every company can use. On things that will benefit not only those people and industries that already exist in the province, but also those that have yet to move here.

This is the only investment that will have any yield. If the money is spent instead picking favorites, then any investment in one company will be offset by a reduction of expenditures by another company. Give Bell a $1 million incentive, and Telus drops $1 million from its allocations for future investment in the province. We gain very little, if anything, from directed investments.

That's why, ultimately, the expenditure on the Commonwealth Games would have been good for Halifax. Not because it particularly needed stadiums and sports facilities. But because it was an investment that would be enjoyed by any current and future resident of the city. It's the sort of thing you can mention when you're talking to the chair of Research in Motion. "Oh yes," you'll say. "That star athlete your company is promoting in its advertisements can be based at our swimming pool."

So let's look at the realities:

1. We need to increase our population. That means we must attract people who are not yet here.

2. We must be prepared for sweeping changes. That means that the politics of special favours and influence must end. No more special deals.

3. We need to increase labour productivity. That means shifting our economy from an emphasis on producing things to an emphasis on producing ideas.

4. We need large scale investments in infrastructure. That is not a way of connecting rural and urban - it is a way of strengthening our cities. Period.

5. Exports? No. Exports are a part of a strategy - but we also need to learn how to gain advantage from our local production. If nobody grows trees like New Brunswick - then we should be selling them to New Brunswickers.

6. We need to expand our corporate base. We need to level the playing field and stress the viability of our communities and not the size of our handouts.

7. Leaders must step forward.

Thursday, March 08, 2007


Response to Amy Gahran.

Off the top of my head I can name dozens of people who are performing heroic journalism for niche markets. The vast majority of them are not of the crackpot ilk, but rather, have established a loyal readership by being steady, reliable and informed commentators. I consider myself to be one of them.

Yes, the costs are not zero. So, strictly speaking, the media revolution is limited to those with the means. But what is different now is that while, in the past, only large media outlets had the means, today, millions upon millions have the means. And yes, it costs me money to produce what I produce. It's a net loss proposition. But you know - so what?

Alex Dering takes pains to point out to us that people cannot afford computer access, that people are going hungry. "There are people going hungry right now, and there are people going hungry without internet access, too. Got that? Good." Quite so. But hunger - which kills thousands of people a day - is pushed off the front pages by things like Paris Hilton and Anna Nicole Smith.

The stuff most of these niche journalists produce never makes the front pages. I write about online learning, which has a stated objective of providing a free education to every person on the planet. In the commercial press, such an objective is almost treason.

Amy Gahran talks about trust - "people really do want to know what information and which sources they should trust." We all know that they cannot trust the professional media, which demonstrates over and over again that it is open to the highest bidder (and today, while organizations at the Times practice their mea culpas, the entire nation of Iraq suffers an ongoing agony as a result).

But people do not live in a vacuum. They form networks of trusted individuals - called, variously, friends or colleagues - who pass on good sources of information. The good online sources do not need to market themselves.

And *this* site has a 2000 character limit in comments. What are they scared of? Me?! (And it can't count to 2000 correctly either)

Wednesday, March 07, 2007


Paul Anderson writes, " ...there is a distinction between a folksonomy (a collection of tags created by an individual for their own personal use) and a collabulary (a collective vocabulary)."

He expands, " We are also beginning to see compromise solutions known as collabulary in which a group of domain users and experts collaborate on a shared vocabulary with help of classification specialists."

Beth Kanter picks up on this and observes, "The... point makes we wonder about the difference in terms of behaviors and values in tagging communities versus crowd filtering communities (e.g digg)."

I noticed this as well and wondered about it.

It is important to distinguish between a network behaviour, such as the folksonomy as described above, and a group behaviour.

A collection of tags may be created in two very distinct ways:

1. people, working independently, just happen to use the same word to describe the same resource

2. people, working together, agree on a term that describes a given (type of) resource

Method number (1) is a folksonomy, and it is a network behaviour. It does not involve collaboration of any sort.

Method number (2) is not, strictly speaking, a folksonomy. It is a method more common to librarians and taxonomers.
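The distinction can be made concrete with a small sketch. Method (1) requires no coordination at all: a 'folksonomy' ranking is nothing more than a count of the tags that users, each working independently, happened to choose. The users and tags below are invented purely for illustration.

```python
from collections import Counter

# Hypothetical, invented data: each user tags the same resource
# independently, with no agreement on vocabulary (method 1 above).
user_tags = {
    "alice": ["elearning", "metadata", "photo"],
    "bob":   ["elearning", "standards"],
    "carol": ["metadata", "elearning"],
}

def folksonomy(tagging):
    """Rank tags by how many independent users happened to choose them.

    No collaboration is assumed anywhere: the 'consensus' that emerges
    is just overlap in independent choices, which is what makes this a
    network behaviour rather than a group behaviour.
    """
    counts = Counter(tag for tags in tagging.values() for tag in tags)
    return counts.most_common()

print(folksonomy(user_tags))
# 'elearning' ranks first because three users independently chose it.
```

Method (2), by contrast, would replace `user_tags` with a single agreed-upon vocabulary decided in advance - at which point the counting step becomes trivial and the interesting work has moved into the (collaborative, taxonomic) agreement itself.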

We have seen, however, efforts made to organize tags (people will write, "Everybody tag this event 'OCC2007'" or the like).

This sort of organization is arguably no longer a folksonomy, as some people are using a privileged position to instruct other people how to tag (I discuss this in my paper here: )

I would not go so far as to use a word like 'collabulary' - that is a ridiculous word, and is not needed to describe something that we already have perfectly good words for, a 'taxonomy' or a 'vocabulary'.

And the author's suggestion that folksonomies ought to be recognized as 'collabularies' is, in my view, a mistake: it either misrepresents what a folksonomy is, or it uses a new word needlessly.

A community of individuals working independently, such as Digg users, is not collaborating. The rankings are not the result of group action. Rather, each person works independently.

Indeed, it is worth pointing out that when Digg members collaborate, the system is deemed to be broken and the reliability of the rankings cast in doubt. An interesting debate surrounds edge cases (such as the case where one person sees that another has Dugg a resource, and, trusting the other person, Diggs the resource, not because it is good, but because people who Digg popular resources early are rated higher than people who don't).

Tuesday, March 06, 2007

Public Subsidies are Commonplace

Letter to the editor of the Times & Transcript, sent last Friday, published today.

It should not be surprising to see Halifax and Prince Edward Island investing public money in order to attract musical acts such as Faith Hill and Aerosmith.

It is a common practice for governments to subsidize events that may become tourist draws. Subsidies and sponsorships for sporting events are commonplace, for example. That is how we are obtaining a stadium for the upcoming track and field event.

Governments also subsidize such things as conventions, conferences and trade shows. Hence the request from the City of Moncton for such facilities as a Convention Centre and replacement for the Colosseum. Even the recent Hub Cap Comedy festival received municipal support.

These events are sponsored by governments with the recognition that there will be a long-term payoff both for the city and for the government that sponsored them. That is why governments at all levels invest in these events. And it is working; my property value has increased 40 percent in just four years, and my taxes have risen correspondingly.

The entire community needs the exposure and income these events bring. When I am speaking internationally, it brings me pride, and subtly underscores my point, to be able to talk about the Rolling Stones concert. We are represented as a city of progress and prosperity. A city looking forward to the future rather than cowering in the past.

The Times & Transcript is walking on thin ice questioning public support for musical events. Its petty campaign against our regional rivals threatens to undermine our investments in ourselves. We need to be able to bring forward every resource at our disposal, not to be limited by some self-styled campaign of funding purism.

We should not be concerned about what our neighbours are doing. Let's focus on what we are doing, and how we can do it better. Let's stand behind the people who take risks in our city by lending our support to their efforts, not by knocking down their rivals.

Saturday, March 03, 2007

Making Software, Making Money

I am sympathetic with Dave Tosh's plaint. But only to a point. And not so far as when he says that open source does not work.

I hate to be the one to state the obvious, but: the fact that open source didn't work for Dave Tosh doesn't mean that open source doesn't work. No more than you would say, for example, that capitalism doesn't work because it didn't work for some failed businessman.

Let's look at what he says more closely. He writes,
Having worked on a true open source project for more than three years I am now more convinced than ever that the open source model does not work.

The project to which he refers, of course, is ELGG. This is a social-network slash content management system written in PHP. It has been reasonably popular, particularly in the education circles for which it is designed.

But - can I be honest? The PHP content-management system field is pretty crowded. It was crowded even when ELGG launched. There's Drupal and PHP-Nuke and Post-Nuke and Moodle. WordPress, for those who are more blog-oriented. And that's just the popular PHP ones.

And then on the social network side there's Friendster, Orkut, Linked-In and all the rest. LiveJournal, which has been around for ages (and open source, too). The market has been pretty crowded over there.

Here's the question to ask: would ELGG have had any measure of success had it been subscription-based software? Honestly, I doubt it. There was a niche, which was sufficiently small even when the software was free, and which would have closed quickly had people been asked to pay.

Tosh continues,

Sure, in an ideal world, it is the way to go. There should be transparency of data, users should be able to get stuck into the source code if they wish and tweak the software to meet requirements, customers should not be locked into long term contracts with vendors deliberately making it difficult for them to upgrade, change their service etc - that is in an ideal world, not the world we all operate in.

It is, in fact, the world I operate in. I don't deny that there have been challenges. And to be frank, I get really frustrated by open source code a lot of the time. Readers of these pages know of my struggles with Ruby on Rails, my wrestling with Drupal and my most recent bout of dissatisfaction with OpenOffice.

But I keep in mind - as I type in Firefox on my Ubuntu machine (I could use an Apple or a PC, they are both sitting right beside me here) - that Windows has cost me more frustration than all the rest combined. Outlook alone has been more of a problem than all the open source applications combined, and I don't even use it!

My world - as a 'customer' - is the one Tosh describes, and I won't go back.

OK, so what precisely is it that Tosh finds doesn't work in open source? I'll skip his short section on the advantages of open source and get straight to the disadvantages:

1) No real revenue stream outwith the big guns like Linux, My SQL and Apache - it is worth noting that this took years to build up and most of these project are backed by big business such as IBM.

There's a lot packed into that short statement.

Let's first begin with the fact that not one of Linux, MySQL and Apache had its origin with IBM or any other such company. IBM, when it tried to build an operating system, came out with OS/2. It was a failure - not because it didn't work, but because it didn't sell.

Next, let's observe that none of them was created with the intention of creating a revenue stream. As open source software goes, ELGG is almost unique in its intent to create a revenue stream for its producer. True, things like MySQL and even Moodle have made some money for their originators. But that isn't why they were created.

The successful open source projects did, as Tosh notes, take years to build up. They also benefitted from a fairly large development community. In a sense, generating developers is like generating sales. If people think the software is worth the investment, they'll add to it. But if it's a no-sale - well, then, no developers.

Apache would have been nothing without millions of people building websites. Linux has benefited from the X Window System, KDE, Gnome, and dozens of other applications. MySQL would be nowhere without the database-driven applications that extended its capacity. Look at Moodle and look at all the Moodle modules. Look at all the Drupal modules. This is in addition to the large communities working on the base code.

And that's the bottom line. Open source projects are not products intended to produce revenues. It is a mistake to think of them that way. Open source software is developed in order to satisfy a need, one typically experienced by the developers themselves, and an open source project is not a commodity, it is a community.

Yes, people need to earn money in order to live. This is true for every single person that works on open source projects. But making money from the open source product itself is very much the exception, not the rule, and depends on a lot of things falling into place.

2) You cannot get investment, which is crucial for growth, for an open source product released under a GPL license

Strictly speaking, this isn't true. Both Red Hat and Ubuntu have benefited from investment.

More generally, though, this is a line I hear all the time. The line is that, in order to attract the venture capitalists, you have to have some intellectual property. Some patents, say, or some proprietary code. Because they need to be buying something, and if the code is open source, they aren't getting anything for their investment.

And, more generally speaking, this is true. Venture capitalists succeed by creating conditions of scarcity for things of value, so they can charge money for access to those things. The point of the initial investment is to take the thing of value from the concept stage to implementation. Their objective - their only objective - is to create a return on their investment.

If you are selling to venture capitalists, then you need to play by their rules, even if they are short-sighted and predatory. Because they're the ones with the money. And venture capitalists really don't like things like co-ops and shared source and community development because they get in the way of ownership. And ownership is how VCs make their money.

So, yes, you need to get out of open source if you are going to sell to VCs. The question is, do you really need to sell to VCs to make a decent living? No. Of course you don't. Do you need to sell to a VC to create growth? No, again, no. Because if you hit that sweet spot, your problem will be too much growth, not too little. And at that point, you can develop a revenue stream and investment is surprisingly easy to obtain.

It's hitting that sweet spot that is the trick, of course. As hundreds of thousands of software developers around the world will tell you.

3) Open source works as a two way street - however the reality is that most people just take and then complain if it isn't quite right, hardly a fair balance.

That's because open source isn't a quid pro quo.

My relation with ELGG, if I am to be perfectly honest, is exactly as described. I haven't contributed one line of code to the software. Not one semi-colon. But I have made statements that could be called 'complaining', both about ELGG and the more recent Explode!

Is it fair, then, to describe me as someone who just takes? Well, no, that would be absurd. I am, I hope, well known for the fact that all the work that I do is available for free, to be used pretty much as the user wishes to use it. It is probably no secret that I am working pretty hard to make the rest of it - that bit owned by the Government of Canada - open as well. And, in passing, to free the combined resources of the government for the community.

Dave Tosh reads my newsletter every day (well, I hope he does ;) ) and yet he doesn't write a word of it. Do I complain? No. Because the whole point of what I do is that people will use it, and not have to do it themselves.

Yes, with open source software, it is nice to have more developers, especially people creating modules and skins and applications and the like (it's like the Steve Ballmer dance... "developers developers developers developers"). But having few developers doesn't mean that the open source model is flawed. It means the community doesn't need the application. I live with the same harsh reality. Few readers means that there is limited demand for what I write. It's not pretty, but that's reality.

4) You get treated the same as a commercial product but aren't receiving any money for your goods or services. I can recall a situation where we didn't reply to a bug report within 36 hours and were completely ripped for it, yet my old institution accepted waiting 4 weeks for a response from Blackboard regarding a critical bug report for what was institutionally critical data - WTF?

So people act inconsistently. The person who ripped you for it would probably rip into Blackboard as well. Meanwhile, your institution probably doesn't expect to ever get a response from an open source project - they probably think four weeks is good service.

Also, people learn to treat you the way you define yourself. It has never been a secret that ELGG is a business and that the developers are trying to make money from the software. This is why they treat ELGG as a business. If it were clear that ELGG is a hobby, they would react differently (of course, they might not be using ELGG to begin with).

It's like Rails. I ripped into Ruby on Rails in some of my previous posts here. Not because I thought Rails was a business, but because when I went to the Rails site or read what Rails developers wrote, I heard nothing but how simple it was and how automatic everything was. So I installed it, believing what I was told. And then it turned out that Rails did not behave at all as advertised, and I complained, bitterly, for having wasted several good weekends of my life on this stuff.

You told people you were a business. You were treated as one. Case closed.

5) You can't raise funding to work on open source projects if you are the developers - were you to talk about it or perhaps provide services, that is different but because you are directly tied to a 'product' it is seen as bias if public funding was directed your way.

Why not? This happens all the time. How many times have I seen people raise funding for their own projects? It must be in the dozens. The hundreds!

Just across the hall from me Rod Savoie, working with some people across the road at the U de M, developed a software application called Synergic3. They went out looking for people to invest in the product and raised, oh I don't know, something like two or three million from ACOA. They also got a major software company involved and are even dragging me into it.

The fact is, hundreds of things get funded every year, many of them open source, and a lot of that funding goes to the developers. The fact that yours didn't is unfortunate, but you can't generalize from that single case. It may be that the applications weren't sound, that they weren't supported by influential academics (yeah, I know, it's a stupid system, but...), or that the funders didn't see a need for the product.

But the mere existence of the hundreds of other funded projects (seen the E-Framework lately?) shows that this statement is simply false.

6) Open source projects which are deemed to be successful are often not truly open source - they can often hide many features, hold back developments and so on. With good reason - they need to in order to survive!

Care to give us an example of this?

The only cases like this I can think of are cases like Totem (or as I refer to it, the world's worst media player). Because most of the codecs are proprietary, they can't include them with the product. They have to be shipped and installed separately (with a nudge and a wink).

Yes, there is a lot of free software that comes, as they say, 'crippled' - WS_FTP was like that for ages, Paint Shop Pro started out that way, and the list goes on. But these free applications are not open source.

In fact, I think it's very difficult to create a crippled open source product, because (after all) you're opening the source.

7) Most people in the decision making positions don't seem to trust open source, the DFES/BECTA scenario is proof of this.

I wouldn't take DFES/BECTA as proof of anything. But there is no denying that open source faces competition from proprietary software in many environments, including many educational institutions.

It does not follow that all decision-makers oppose open source. And it doesn't follow that they oppose open source across the board. After all, these same systems are using Apache servers. Some of them are using Firefox browsers. It's a pretty mixed landscape.

It's also one that is changing. After all, the gist of the story in the link is that the British MPs are lobbying for a change in the DFES/BECTA policy, so that open source will be allowed. I've been reading a lot of stories like that recently.

Tosh continues,
There are exceptions. However, by and large, most open source projects start out in good faith and fall by the wayside or simply remain as cool projects to tinker away on due to a lack of money.
Well guess what. So do most businesses. So do most things of any sort that people do.

It's like gardening. Many people garden. Sometimes people think that they can make money from their gardens, and so they become horticulturalists or farmers or something. Some of them make a decent living, and many others fail because they couldn't make the revenues cover the expenses, because they couldn't get people to buy their strawberries, or whatever. A very small percentage of people make a million from it - these are the DelMontes of the world, and usually what they have to do is to get the government to seize land from illiterate peasants or something.

And the vast majority of gardens remain hobbies. Not because of a lack of money. Not because we should always allow gardeners to seize land from peasants. But because it genuinely is a hobby. Something they do for fun and sometimes for the common good.

It is important that you do not confuse this with a lack of traction or a good product. Take Elgg for example; Elgg is the most popular white label social networking platform in the world powering over 2000 networks. However, Elgg could power 100,000 networks and it would make no difference - there is no revenue stream as we give everything away under a GPL license.

If there were 100,000 ELGG networks then it is very likely there would be a revenue stream.

Yes, the revenue stream would not come from the sale of software. Because that's not how GPL works.

For one thing, it would be a lot easier to get funding for ELGG projects. Funding agencies take 100,000 a lot more seriously than 2,000. For another, there would be many more opportunities to offer workshops and seminars. Moreover, the service and support market would be a lot larger. And finally, 100,000 is not a static market - chances are that software that has 100,000 users is on its way to a million, and with numbers like that the venture capitalists become very interested. Because even if you have licensed the software under GPL, you can still sell a commercial version.

The number does matter. It is foolish to think anything else.

Tosh continues,
If you transfer this to the web 2.0 market - most successful web 2.0 products had/have venture capital or angel investment of some description; flickr, youtube, facebook, myspace (founder actually sold spyware to make money, funny that), last fm - why? They are closed, commercial services where there is an obvious business model, investors can put their money into something tangible that can be acquired by a computing giant.

We have already discussed above the relation between proprietary software and venture capital. The VCs are only interested in things they can own, ethics be damned. That's why they're interested in spyware. That's why they don't care if YouTube was running 'pirated' videos; it's just a matter of risk versus reward.

What made these things successful was not that they were closed. People didn't look at them and say, "Oh good, closed software, I think I'll use that service." No, they were successful because they were popular, because they offered a service - including some services that skirt the edge of the law - that people wanted.

So here's a formula for success - if you are smart enough or lucky enough to create a product millions of people will want to use, and if you are willing to sell user data and lock in user accounts, and if you are willing to surrender control of your company to VCs with even lower ethical standards, then there is a one-in-a-million chance that you'll make your millions.

Most people decide that life is better spent doing other things.

Take the latest Cisco purchases. There are plenty of very good white label social networks out there; so why did Cisco pay out a rumoured $25 million dollars for crap technology? One can only assume it is because the services they bought were closed source and commercial, it was certainly not because those products were good!

Who knows why people buy software companies. Sometimes the acquisitions make no sense. It's like when Salon bought the WELL, for example. There was no proprietary software, no nothing, just a community of users. But Salon thought the name was worth the money (and many of the users are still there).

See, the presumption being touted here is that there is only one value proposition: proprietary software. But it's just not so. You can find a market for just about anything.

I remember when Slashdot was sold to Andover. It certainly wasn't because of the software; Slashcode was (and is) open source perl code. But it made the developers millions. And Slashdot is still out there today, attracting script kiddies like there was no tomorrow. Around that same time we thought we were going to sell NewsTrolls to Think Geek. We came close, but we didn't have the readership. There but for the grace of the internet gods go I.

Why did Cisco buy Tribe? Especially if they had already acquired Five Across? Perhaps they are clearing out the underbrush, so there's room for them to make a (much belated) social network play. Or maybe there's something much more interesting happening behind the scenes.

Therefore, what is the future for open source innovation? Developers need to live and unfortunately the current climate can and will turn many away from building true open source applications, and the reality is that most users won't care - they just want services that work, so perhaps there is a lesson in all this; listen to the user.

OK, well, like I said, I am sympathetic. How can Dave Tosh make money?

Well, he could get a job.

Because he is right in this, at least: if you plan to make and sell software and make a million doing it after just three years' work, then open source isn't for you.

Doing it commercially isn't either, though.

Either way, it's a long shot. And you really have to pay your dues. And you have to wait for the results to come in. And sometimes, you never make a million.

I mean, look at all the commercial applications out there. How many of them do you think made their developers a lot of money? Most of the stuff we see on the shelves was created by Microserfs slaving away in their cubes, the profits being made by the gnomes with the investment money.

Look at all the businesses out there. The millions of mom-and-pop restaurants. Only a small handful become a McDonald's or a KFC. Most restaurants fail. Sad, but true - people pick the wrong location, the wrong menu, the wrong chef, whatever.

Being in business for yourself means trying over and over and over again. In software, it means creating something new every few weeks, not working on the same thing for three years - at some point, if you're in it for money, you've gotta say, "Let's let this turkey go." Of course you keep supporting it, because it has created a community for you (aka the network you can use to launch new software). But at a certain point you say this software isn't working for you.

Personally, I think that if the ELGG developers picked up on Explode!, worked like crazy, did it exactly right, and got a lucky break or two, then it would acquire enough traction to create a revenue stream. Then this (coupled with the network) could be turned around and sold as a commercial edition to the business community. And thus the foundations of the enterprise that might actually make a million can be laid.

Yeah, it's a long shot. And I wouldn't blame Dave Tosh for saying that his failures so far prove that it is foolish to try to work for yourself or create your own business.

But my understanding of people who actually are in business for themselves is that the money is secondary. Just like in open source. The reason they work for themselves is that they like the freedom, the opportunity, the challenge, the creativity. And just so, people who work in open source like the community, the developers, the sharing and the software.

And now we come to the other side of the story.

Yes, Dave Tosh could work on proprietary software. Then, if the conditions are just right, he can make a living selling his creations.

More likely, he'll be faced with the need to do contract work or maybe even to get a job. Because writing software is a lot like writing - it's really hard to make those sales. Most people who do it professionally live on very little money.

But what will also have happened is that he will have turned his back on the community. I would not, for example, be criticizing his software. Because I wouldn't care about it (more accurately, because the bar is set a lot higher). I'd be very unlikely to mention it in the newsletter, and I certainly would never code any modules for it.

OK, sure, that's just me.

But most of the community would react that way.

Not because they hate Dave Tosh. Most people would probably say he's a nice guy. But now he's kind of like the guy who sells insurance or the guy who writes subroutines for Microsoft. They're nice people, but you don't see any particular reason why you should help them with their work.

Yes, there is a world out there, outside the open source community.

It's a world that I see a fair amount of, from my position. It's a world where 'community' becomes 'competition', where 'network' becomes 'rival'. It is a world of secrets and NDAs and underpaid labour and coding bad software because that's what the customer wants. It's not a very happy world and the people I meet are not anything like remotely as interested in their work as the open source community, not even those who are only doing it as a hobby.

Yeah. You can make a living selling proprietary software. Maybe. You can even make a million doing it. Maybe. But if you love making software, then it's pretty hard to give up the community and the camaraderie you would need to give up to make software and make a million selling it. It really does cost you your soul.