Thursday, February 26, 2009

Thoughts On Solutions

Submitted to the UNESCO OER Discussion, February 26, 2009

1. We have to accept that in some communities there will be priorities other than education. The need for clean water and safe food may be much more pressing, for example. In this discussion, we read of places without access to electricity. Providing electricity in such cases is of primary importance. Electricity - whether solar or wind generated or even from hydro or human power - can do so much more than power computers: it provides water, refrigeration, light, and more. Electrification is a key requirement, and in my estimation, for such regions, talk of OERs is a distraction.

2. The corollary is that the design of a program regarding OERs should not be based on the needs of regions where other priorities - such as electrification - are significantly greater than the need for OERs. We may be told (by publishers?) to focus our efforts on non-digital technologies. This is a distraction and a distortion of the idea of open educational resources. Attempting to employ non-digital means to distribute OERs is significantly more expensive; this is why the idea of OERs really became feasible only with the advent of digital technologies.

3. After basic civic infrastructure, and after electricity, the next prior condition for OERs is a viewing and playing platform. It is clear that significant advances have been made here in the last few years, catalyzed by such projects as the Simputer and One Laptop Per Child, and ultimately made possible by flash memory, low-wattage CPUs, and advanced display technology. The availability of low-cost computing greatly increases access to digital materials, and projects that increase such access (microloans for the purchase of a netbook, for example) should be contemplated. While mobile phones are touted as a viable platform, this should be regarded with some caution: data rates on mobile phones are very high, displays and computing capacity are minimal, and mobile phones are closed systems, with the telco retaining control over the platform - both the hardware and the operating system.

4. Given access to suitable platforms, the next major requirement is typically construed as access to open educational resources themselves. As backbone connectivity is often prohibitively expensive, we often hear proposals for the encoding of content onto flash memory and DVD for distribution and possibly sale. While access to materials is desirable, and should be promoted, this misconstrues the need. Just as a telephone system is valuable for the conversations it can carry, so also a computer network is valuable for the communications it sustains. Simply recording some telephone conversations and distributing them to recipients is no substitute for telephone connectivity. The next requirement after computation, therefore, is not content, it is connectivity.

5. Where possible, computers should be deployed in clusters and network connectivity established. Even if backbone connectivity to the internet cannot be sustained (because of cost and availability), local connectivity can be used to share communications and resources. Wireless mesh networks, wider-area WiMAX networks, and regional iBurst networks are all either viable or soon-to-be-viable technologies. With such networks, the physical distribution of resources (i.e., content on flash memory or DVD) can focus on single nodes, to be propagated as needed from there via the local network.

6. The production of open educational resources ought to be thought of as a community process, with the distribution of these resources established through a process of sharing rather than giving or sales. When the various considerations regarding the sustainability of OERs are taken into account, as I do here, then it seems clear that, unless the creation and management of OERs is community-based, the result will be a requirement for significant overhead. When we think of OERs as something that is given, then we are inclined to channel resources to the givers, in order to sustain the giving. The givers, however, are typically those least in need of resources: it is no coincidence that the givers are large institutions such as MIT, Stanford, and the Open University. But it is a misapplication of funds to channel resources to such large institutions, the entities in the value chain least in need of additional subsidy and support.

7. Models and instances of knowledge creation and sharing ought to be instantiated and propagated. The recent effort by the Indian government to document and share traditional and regional knowledge is an excellent case in point. Such initiatives depict the communities served by OER programs not merely as passive recipients of knowledge, but as active creators and sharers of knowledge. An OER training package is proposed, focused not so much on the receipt and use of OERs but rather on their creation and distribution. As people begin to create and share their own knowledge, they begin to see the value and insight in others'.

In summary:

- the key is to focus on connectivity, not content
- low-cost 'netbook' computers are encouraged, with an emphasis on local connectivity
- resources should not be directed toward 'givers', as they are the entities least in need of support
- resources should be directed toward helping intended 'recipients' share their own knowledge

I hope this is useful.

Update - February 28, 2009

I would like to respond to Moyomola's comment,

Bolarin, Moyomola (ICARDA) wrote:
> Dear Stephen,
> Your thoughts on solutions raised many questions within me, one of which I can only express now. To begin with, in your summary, you think “the key is to focus on connectivity, not content”
> My question: Will there be a need for connectivity where there are no content to connect? In which case availability of “content” is the driving force for “connectivity.”
Again, as I stated earlier, in small villages where there is no electricity, the priority may be to provide electricity, not OERs.

That said, it must be stressed, by 'connectivity' I do not mean broadband access to the rest of the world. As I tried to emphasize in my previous post, what I meant most especially was connectivity with each other. This can be accomplished without paying any internet access fees, if the computers are equipped with wireless mesh network capacity.

You ask, "will there be a need for connectivity where there are no content?" That is like asking, "will there be a need for a telephone service without pre-recorded messages?" It is to mistake the internet as a content-access system, when in reality it is a communications system. The internet is much more than merely a means of receiving content.

If people are connected, they will produce their own content. If they have a means to create, to communicate, to record, share and save, they will create their own knowledge and share this knowledge. We know this because this is what has happened in all other areas that have received the internet. In classrooms, in businesses, in homes, people are sending messages back and forth, creating accounts on social networks, uploading photos and videos, writing poems, creating software, and so much more.

Now this does not mean that there should be utterly no content, and utterly no connectivity. I believe that it would be useful to have a computer in local networks that contains a library of content - a copy of Wikipedia, for example, the Stanford Encyclopedia of Philosophy, Open University courses, Flickr photo libraries, WikiEducator courses, maybe even my own website (heh). This content would be selected and downloaded, or brought in on flash memory or DVD, once, for the whole community, after discussion of the matter.

But even this content is much less useful without community connectivity. There is a big difference between reading something all by yourself, and reading it as a part of a group and creating and sharing things based on it. Indeed, the need for content is generated by community and communication. The ability to use content is created through community and communication.

Finally, community and connectivity, unlike mere content, are a means of generating value and wealth for a community. By capturing and creating its own knowledge, a community is creating something of value. Whether or not this value can be monetized, what is important is that the members of the community are not merely passive recipients of their learning, but also actual creators of that learning. By developing this capacity, they become able to take part in online commerce, first among themselves, and then with the rest of the world.

Moyomola, I have never been to your village (but I would love to visit one day - does it have a website?). But I have been in communities large and small, in South Africa and Lesotho, in Colombia, in Malaysia. What I see is not simply a desire to read stuff and to watch TV, but to create and share. As soon as people see what can be done with this technology, very simply and very cheaply, their faces light up and they want to begin to create. And they don't stop.

-- Stephen

p.s. I would like to add two remarks, to add some context for the rest of the discussion members.

First, I would like to make the observation that, insofar as there is a need for content, as described above, the content already exists or can continue to be developed through voluntary (community-based) effort. The same is the case for software, with organizations developing packages of free community applications for learning communities, for example. These are available for free; there should be no need to purchase commercial packages.

Second, where expenditures are required, it seems to me, it would be much more appropriate that they occur in recipient countries rather than donor countries. For example, suppose it were determined that there were a need to create copies of DVDs to distribute community applications and contents; then blank DVDs should be purchased from local vendors and local staff hired to create copies. Or suppose it were determined that certain network infrastructure were needed to create community mesh networks; then an enterprise should be established in a recipient country to manufacture and distribute these components. To reiterate the wider point: it just seems wrong to me to see the bulk of money intended for world development end up in the pockets of people and agencies based in North America and Europe.

Tuesday, February 24, 2009

Connectivist Dynamics in Communities

I was asked, "if you could give me some orientation on how I could integrate some questions in the survey (or maybe in the Social Network Analysis) that explain or prove the existence of connectivist dynamics inside the community and if its impact can be tested."

That question, in turn, raises the question of what exactly would constitute connectivist dynamics. On the one hand we could say simply that it's network dynamics, and that if we detect network properties (as revealed, say, in social network analysis) then we have connectivist dynamics. But I don't think that just any network constitutes a connectivist network. What distinguishes a connectivist network is that it produces connective knowledge. This is what makes it suitable for learning.

So what constitutes connective knowledge? In my paper An Introduction to Connective Knowledge I describe a 'semantic condition' consisting of four major elements. These elements distinguish a knowledge-generating network from a mere set of connected elements. Thus, I would say that a test for these four elements would identify a connectivist dynamic within a community.

1. Autonomy - are the individual nodes of the network autonomous? In a community, this means, do people make their own decisions about goals and objectives? Do they choose their own software, their own learning outcomes? If they are in the network, and function within the network, merely because they are managed - because they're told to be in the network and told what to do in the network - then they are merely proxies, and not autonomous agents. Proxies do not produce new knowledge. Autonomous agents, however, do.

2. Diversity - are the members of the network significantly different from each other? Do they have distinct sets of connections? Do they enter into different states, or have different physical properties? Are they at different locations? In a community, this means, do people speak different languages, come from different cultures, have different points of view, make different software selections, access different resources? If everybody does the same thing, then nothing new is generated by their interacting with each other; but if they are diverse, then their participation in the network produces new knowledge.

3. Openness - does communication flow freely within and without the network? Is there ease of joining (and leaving) the network? In a community, this means, are people able to communicate with each other, are they easily able to join the community, are they easily able to participate in community activities? In practice, what one will observe of an open community is that there are no clear boundaries between membership and non-membership, and that there are different ranges of participation, from core-group interaction through occasional posting to reading and lurking behaviour. If a community is open, then it sustains a sufficient flow of information to generate new knowledge; but if it is closed, this flow stagnates, and no new information is generated.

4. Interactivity and Connectedness - is the knowledge produced in the network produced as a result of the connectedness, as opposed to merely being propagated by the connectedness? If a signal is merely sent from one person to the next to the next, no new knowledge is generated. Rather, in a community that exhibits connectivist dynamics, knowledge is not merely distributed from one person to another, but is rather emergent from the communicative behaviour of the whole. The knowledge produced by the community is unique; it was possessed by no one person prior to the formation of, or interaction in, the community. Such knowledge will very likely be complex, representing not simple statements of fact or principle but rather a community response to complex phenomena.

My contention is that, if these four dynamics are detected within a community, then a connectivist dynamic exists within that community, and (consequently) the probability of that community producing (new) connective knowledge is increased.

Thursday, February 19, 2009

Template for OER Success Stories


* Participants: A note about the learners and educators involved. Who provided the solution, for whom?

* Context: A note about the context, e.g. socio-economic conditions, geographical region, rural vs. urban, available internet access, ...

* Solution: Please give details of the solution here.

* Key Barriers: Please give some of the key barriers to access addressed by this solution. (Ideally referring back to our list of access issues.)

o Access in terms of awareness. (Lack of awareness is a barrier to OER.)

o Access in terms of local policy / attitude. (Do attitudes or policies pose barriers to using OER?)

o Access in terms of languages. (How well does the user speak the language of the OER?)

o Access in terms of relevance? (Is the OER relevant to the user?)

o Access in terms of licensing. (Is the licensing suitable / CC?)

o Access in terms of file formats. (Are the file formats accessible?) Access in terms of disability.

o Access in terms of infrastructure (Lack of power/computers makes access hard.)

o Access in terms of discovery. (If the OER is hidden, not searchable, not indexed, it's hard to find.)

o Access in terms of ability and skills. (Does the end user have the right skills to access?)

* Scalability: Please comment on how your solution might "scale".

* Questions: What questions should we be asking about this solution that will add to our understanding of enabling access to knowledge and learning resources?

* Implications and adoption: What are the implications of this solution for OER and enabling access to knowledge and learning?

* Links: If there are any web links to initiatives or projects, please include them!

Tuesday, February 17, 2009

What Ignatieff Needs to Do: Talk To Canadians

Responding to James Morton:

If you want to convince us, then you'll have to show us evidence of him being thoughtful and intelligent.

I haven't seen that; 'smarmy' is the word I'd use to describe him so far. He has to drop the politician act and begin talking seriously with Canadians about the problems we face.

He will have to trust that we can understand him.

Let's see some video of him - where's his YouTube presence? The top videos of him are comedy sketches - and Conservatives! Let's see him do some real communicating. Let's see *how* he thinks, not how he plays politics.

Right now he's up against a much tougher opponent in Harper than he cares to admit (he should have taken him out when he had the chance). And Harper is getting to people - don't believe the polls, particularly, Harper is building strength on strength that will not be reflected until these strengths are tested.

Unless Ignatieff actually *talks* to Canadians, he will lose. And he must talk directly to Canadians, bypassing a media that has, by and large, turned its back on the Liberals and the left in Canada.

Monday, February 16, 2009

Access2OER: The CCK08 Solution

Contributed to the UNESCO OER discussion list, February 16, 2009.

I believe it would be worth a few words to describe a course run by George Siemens and me last fall. The course was titled 'Connectivism and Connective Knowledge'. It was offered through the University of Manitoba as a credit course, but we also offered the course for free to any person interested. It came to be called the MOOC - Massive Open Online Course.

* Participants: A note about the learners and educators involved. Who provided the solution, for whom?

George Siemens and I acted as instructors. Logistical internet support was offered by the University of Manitoba, by Dave Cormier, and by myself. 24 students registered and paid fees to the University of Manitoba. 2200 people signed up for the course as non-paying participants. We offered all aspects of the course to both paying and non-paying participants, with the exception that paying participants submitted assignments for grading and received course credit.

* Context: A note about the context, e.g. socio-economic conditions, geographical region, rural vs. urban, available internet access, ...

Participants registered from around the world, with an emphasis on the English-speaking and Spanish-speaking world. The course was offered in English; Spanish participants translated key materials for their own use. The course attracted a wide range of participants, from college and university students to researchers to professors and corporate practitioners.

* Solution: Please give details of the solution here.

The course was designed to operate in a distributed environment; we did not centralize on a single platform or technology. With the assistance of University staff and Dave Cormier, George and I set up the following course components:
- a wiki, in which the course outline and major links were provided
- a blog, in which course announcements and updates were made
- a Moodle installation, in which threaded discussions were held
- an Elluminate environment, in which synchronous discussions were held
- an aggregator and newsletter, in which student contributions were collected and distributed
We encouraged students to create their own course components, which would be linked together with the course structure. Students contributed, among other things:
- three separate Second Life communities, two of which were in Spanish
- 170 individual blogs, on platforms ranging from Blogger to edublogs to WordPress and more
- numerous concept maps and other diagrams
- Wordle summaries
- a Google group, including a separate group for registered participants

* Key Barriers: Please give some of the key barriers to access addressed by this solution. (Ideally referring back to our list of access issues.)

o Access in terms of awareness. (Lack of awareness is a barrier to OER.)

Given that we attracted 2200 people, we addressed the lack of awareness in some fashion. The course was not widely advertised; it was posted on George Siemens' and my newsletters. That said, these newsletters are leading sources of information to a community that would be interested in the course.

o Access in terms of local policy / attitude. (Do attitudes or policies pose barriers to using OER?)

One of the major attractors was that the course was offered by the University of Manitoba. It was necessary to convince the university to offer an open course, which George Siemens managed by adding the enrollment component. In one sense, the paying students funded the non-paying students; in another sense, offering the course as an open course created sufficient marketing to attract the paying students. The University was satisfied with this result and will be employing the same model again.

o Access in terms of languages. (How well does the user speak the language of the OER?)

We did not provide multilingual access. However, because we encouraged participants to create their own resources, we created the conditions which enabled a large self-managed Spanish-language component to the course.

o Access in terms of relevance? (Is the OER relevant to the user?)

The design of the course - as a distributed connectivist-model course - created a structure in which the course contents formed a cluster of resources around a subject area, rather than a linear set of materials that all students must follow. Because participants were creating their own materials, in addition to the resources found and created by George Siemens and myself, it became apparent in the first week that no participant could read or view all the materials. We made it very clear that the expectation was that participants should sample the materials, selecting only those they found interesting and relevant, thereby creating a personal perspective on the materials that would inform their discussions.

o Access in terms of licensing. (Is the licensing suitable / CC?)

All course contents and recordings were licensed as Creative Commons Attribution Share-Alike Non-Commercial.

o Access in terms of file formats. (Are the file formats accessible?) Access in terms of disability.

We did not try to provision access in all formats; rather, we employed a wide variety of formats for different materials and encouraged mash-ups, translations, and other adaptations.

o Access in terms of infrastructure (Lack of power/computers makes access hard.)

We experienced a full range of issues. Basic course material was provided in HTML and plain text; however, various course components required more bandwidth. UStream proved useful to nobody, as the bandwidth requirements were too great even for the instructors. Skype worked well for planning and recording, but not for instructing. Elluminate was effective with limited bandwidth, but had limits on the number of seats we could offer (it was capped at 200, though to be fair, Elluminate said they would extend this as needed). We made MP3 recordings of all audio for download. Second Life was accessible only to those with sufficient bandwidth and a capable platform. Essentially, the structure of the course provided a wide range of access types, making it possible for people with limited infrastructure to participate, while still employing more intensive applications.

o Access in terms of discovery. (If the OER is hidden, not searchable, not indexed, it's hard to find.)

Though we provided search, the major resource related to discovery had nothing to do with search. The provision of a daily newsletter aggregating and distributing course content proved to be a vital link for participants. A steady enrollment of 1870 persisted through the duration of the course. In evaluations and feedback, participants said that the newsletter was their lifeline. A full set of archives was provided, allowing people to explore the material chronologically and to make up days they may have missed.

o Access in terms of ability and skills. (Does the end user have the right skills to access?)

One of the things that we noticed was that, by combining participants from a wide range of skill sets, people were able to - and did - help each other out. This ranged from people answering questions and providing examples in the discussion areas, to people commenting on and supporting each others' blogs, to those with more skills setting up resources and facilities, such as the translations and Second Life discussion areas.

* Scalability: Please comment on how your solution might "scale".

We believe that the connectivist model employed in this course might offer a unique approach to the problem of scalability. We could not, and did not try to, provision everything that was needed for 2200 students. Rather, we created conditions, and encouragements, where participants would provide additional resources for themselves. The role of the instructors and facilitators is essential in this model, but this role is not to provide solutions but rather to establish a basic structure.

Regarding marking and recognition, the course offered an insight that may prove useful in the future. While 24 students were graded by the University of Manitoba, we did receive (and grant) a request for a student from another country to be assessed and graded by their own institution. All assignment descriptions were displayed as part of the open course, and the assessment metric was also distributed, so other institutions could know everything needed in order to provide evaluation and feedback.

* Questions: What questions should we be asking about this solution that will add to our understanding of enabling access to knowledge and learning resources?

I think the main questions are in the area of applicability: would this model work in other areas? Would it work in other communities?

In addition, I am exploring the question of whether this approach can be supported with technology designed specifically for this model, for example, the creation of serialized feeds to automatically create and conduct cohorts through the course material.

* Implications and adoption: What are the implications of this solution for OER and enabling access to knowledge and learning?

The course - which came to be known simply as CCK08 - was a landmark, we believe, in open access, because while providing the formal requirements of open learning - course structure and content, recognition, assessment and credentials - it nonetheless operated on a very different model from other OER initiatives. Materials for the course were not 'produced' in the traditional sense - rather, the instructors created a framework, populated that framework with open materials already extant on the web, added some commentary and videos of their own, conducted open online sessions and recordings, and created the infrastructure for wide student participation.

* Links: If there are any web links to initiatives or projects, please include them!

Course materials may be accessed from the course wiki:
Here is the course blog:
Here is the newsletter site (note that newsletter publication ceased with the end of the course):
Here are some participant feeds:

I hope you found this contribution useful.

Saturday, February 07, 2009

Poverty and Technology

Responding to an enquiry regarding technologies that will address the issues of global justice and poverty:

I want to first say that while these technologies can play a role, the primary resolution to the issues of global justice and poverty are social and political, not technological. If we are serious about reducing and eliminating injustice and poverty, we will turn our attention first to those measures that perpetuate injustice and poverty: trade imbalances and trade policies, the IMF and global debt, policies that prohibit the establishment of social services and relief (eg., World Bank policies that direct nations to reduce spending on social services), the arms trade, exploitation of resources, patent (especially food and medical patents) and copyright, and the like. Technology will solve none of these problems, and yet, these problems are the primary causes of poverty and injustice worldwide.

That said, the primary role technology will play is to increase capacity. We see this especially in nations such as India, where technology has enabled the population to contribute to the world economy as producers of knowledge, information and services. Communication technologies help previously isolated regions - such as my own province in Canada - to offer services such as call centres and help services. We in New Brunswick, like the people of India, have also used these ICTs to create new products and services - for example, e-learning applications.

ICTs are an equalizer. They effectively share the means of production with a wider population, lowering the barrier to entry, and enabling people to educate themselves and create, with minimal investment, a productive capacity. There remain challenges to less developed economies - ICTs have to exist, for example, and connectivity (which continues to be outrageously expensive in places like Africa) needs to be in place. But once installed, the infrastructure almost immediately begins to produce knowledge and wealth.

But again - let me stress - these technologies are not replacements for global social and economic policies that promote justice and equity. There is an analogy in the field of education. Because an education is so important to a person's material well-being, it has been suggested that offering education to poor people will alleviate their poverty. But this is not the case; education is a necessary, but not sufficient, condition. People in poverty require a wide range of supports, including health and social services, transportation assistance, daycare and children's support, mentoring and counseling, housing and clothing, and education. But it has become far too common to see proposed educational approaches to end poverty instead of the wider range of measures actually required. This is unfortunate, for it not only discredits education, it perpetuates the condition the programs are (allegedly) intended to solve.

Similarly with technology. Simply sending technology to less developed nations will not relieve those of us in the wealthier world of the responsibility of adopting fairer trade and economic policies. Technology does not get us off the hook: we still have to address poverty and social justice.

Thursday, February 05, 2009

Serialized Feeds

A serialized feed is one in which posts are arranged in a linear order and where subscribers always begin with the first post, no matter when they subscribe to the feed. This contrasts with an ordinary RSS feed, in which a subscriber will begin with today's post, no matter when the feed started.
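The contrast can be sketched in a few lines of Python. The function and names below are my own illustration, not any particular feed system's API: the slice of posts a subscriber sees is computed from that subscriber's own subscription date, so every subscriber begins with the first post.

```python
from datetime import date

def serialized_items(posts, subscribed_on, today, per_day=1):
    """Return the slice of posts a subscriber should have received so far.

    Unlike an ordinary RSS feed, which always serves the latest posts,
    the slice here depends on the subscriber's own start date, so every
    subscriber begins with post one.
    """
    days_elapsed = (today - subscribed_on).days
    released = min(len(posts), (days_elapsed + 1) * per_day)
    return posts[:released]

posts = ["Chapter 1", "Chapter 2", "Chapter 3", "Chapter 4"]

# A reader who subscribed today sees only the first post...
print(serialized_items(posts, date(2009, 2, 5), date(2009, 2, 5)))  # ['Chapter 1']

# ...while a reader who subscribed three days earlier has reached Chapter 4.
print(serialized_items(posts, date(2009, 2, 2), date(2009, 2, 5)))
```

An ordinary feed, by contrast, would serve `posts[-n:]` to everyone regardless of when they subscribed.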

The idea of serialized feeds has been around for a while. This page from 2005, for example, allows you to read Cory Doctorow's novel Someone Comes to Town, Someone Leaves Town by RSS. And Russell Beattie offers serialized books via his Mobdex serialized feeds system. In 2006, a company called FeedCycle offered what it called cyclic feeds: "For example, if you were to take Moby Dick and divide it into 100 parts, and publish them all in one huge RSS feed, that would be a cyclic RSS feed." Feed cycles, as they have come to be called, have also been used for podcasts. Tony Hirst has written about serialized feeds, demonstrating the concept with services like OpenLearn Daily.

There is no academic literature discussing the use of serialized feeds to support online learning, though the subject of paced online learning has been discussed. Anderson, Annand and Wark (2005) examine the question of pacing from the perspective of student interactions: "Increased peer interaction can boost participation and completion rates, and result in learning outcome gains in distance education courses." But the use of serialized feeds does not automatically increase interactions, and it is arguable whether pacing by itself improves learning outcomes.

Serialized Feeds: Basic Approach

A serialized feed is basically a personalized feed, because each person begins at a different time. Personalized web data is typically managed by CGI or some other server process, which gathers relevant information about the user (such as the time he or she subscribed to the feed) and generates the resulting feed. The feed is then typically identified with a serial number, which is processed when the RSS feed is requested by an aggregator.
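A minimal sketch of this per-subscriber approach, with invented names throughout (the subscription store, the serial numbering, and the post format are all assumptions for illustration):

```python
from datetime import date

# Each subscription record stores when the user subscribed; the feed URL
# carries a serial number identifying that record (illustrative sketch).
subscriptions = {}   # serial number -> subscription start date
next_serial = 1

def subscribe(start):
    global next_serial
    serial = next_serial
    next_serial += 1
    subscriptions[serial] = start
    return serial

def render_feed(serial, today, posts):
    # Regenerated on every aggregator request - this per-request processing
    # is exactly the scalability concern discussed below.
    start = subscriptions[serial]
    due = (today - start).days
    return [content for offset, content in posts if offset <= due]

posts = [(0, "Welcome"), (1, "Lesson 1"), (2, "Lesson 2")]
s = subscribe(date(2009, 2, 1))
print(render_feed(s, date(2009, 2, 2), posts))  # ['Welcome', 'Lesson 1']
```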

This approach, however, raises some concerns:
  • First, it creates a scalability issue. RSS feed readers typically access a web site once an hour. If a CGI process is run for each feed, then each user results in 24 CGI requests a day. Even if the frequency is scaled back, having large numbers of users can place a considerable load on server processing.
  • Second, it creates a coordination issue. If each feed is personalized then in order for interaction to occur there needs to be some mechanism created to identify users of relevantly similar feeds.
These problems were addressed by adopting a cohort system for serialized feeds. But first, some discussion on the structure of a serialized feed.
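The core of the cohort idea can be sketched briefly: rather than one feed per subscriber, a new edition of the course starts every c days, and everyone who joins in the same window shares one feed. The rounding rule used here (a subscriber joins the next cohort to start) is an assumption; the text does not specify it:

```python
def next_cohort_offset(day_joined, c):
    # Cohorts begin at day 0, c, 2c, ... after the master page was created;
    # return the first cohort boundary at or after the day the user joined.
    # (Assumed rounding rule, for illustration only.)
    return ((day_joined + c - 1) // c) * c

print(next_cohort_offset(17, 7))  # 21
```

Everyone assigned the same cohort offset reads the same feed, which is what makes shared processing and peer interaction possible.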

In order to simplify coding, the gRSShopper framework was used. This allowed courses to be constructed out of two basic elements: the page and the post.

The page corresponds to a given course. It consists of typical page elements, such as the page title, content and, where appropriate, a file location, along with default templates and project information. The page content defines the RSS header content. Pages are identified with a page ID number. The page also has a creation date, which establishes its start date, set by default to the exact time and date the page was created.

The post corresponds to an individual RSS feed item. While a person subscribes to an RSS feed as a whole (corresponding to a page), he or she receives individual posts as RSS items over time. A course thus consists basically of a page and a series of posts. Posts are identified by post ID numbers, and are associated with pages by a thread value corresponding to the ID number of the page.
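The page/post structure described above might be modelled as follows. The field names are assumptions for illustration, not gRSShopper's actual schema:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    post_id: int
    thread: int        # ID of the page (course) this post belongs to
    offset: int        # days after the course start that the post appears
    content: str

@dataclass
class Page:
    page_id: int
    title: str
    content: str       # defines the RSS header content
    created: datetime  # doubles as the course start date
    cohort: int        # cohort size, in days

page = Page(127, "Fallacies", "<title>Fallacies</title>", datetime(2009, 3, 10), 7)
post = Post(1, 127, 6, "Reading list")   # linked to the page via its thread value
```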

Serialized Feeds: Pacing

Pacing is managed through two basic elements.

First, each page defines a cohort number. This number establishes the size of the cohort, in days. Thus, if a page's cohort number is '7', then a new edition of the course will start every 7 days. In the gRSShopper serialized feeds system, a new, serialized, page is created for each cohort. This page is identified by (a) the ID number of the original master page, and (b) the offset from that page, in total number of days, from the start date of the master page. These serialized pages are stored as records in the database.

Second, each post is assigned an offset number. This number defines the number of days after the start of the course that the post is to appear in the RSS feed. For example, suppose the course starts March 10. Suppose the post has an offset number of 6. Then the post should appear in the RSS feed on March 16.
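The offset arithmetic is simple date addition; the example from the text works out as follows:

```python
from datetime import date, timedelta

def post_date(course_start, offset):
    # A post with offset n enters the feed n days after the course starts.
    return course_start + timedelta(days=offset)

# Course starts March 10; a post with offset 6 appears March 16.
print(post_date(date(2009, 3, 10), 6))  # 2009-03-16
```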

This creates everything we need to create a serialized feed. To begin, we have a master page and series of associated posts:
  • Page Master (time t days, cohort size c)
    • Post 1 (t+o1 days)
    • Post 2 (t+o2 days)
    • etc.
The page also has a set of serialized pages, created as needed, each corresponding to an individual cohort:
  • Page Master (time t days)
    • Serialized Page 1 (t+(c*1)days)
    • Serialized Page 2 (t+(c*2)days)
    • etc.
Each serialized page n has a start date d = t+(c*n)days, and by comparing the interval i between the current date and the start date, we can determine which post should be posted in its RSS feed - it will be the post or posts with an offset value of i.
  • Page Master (time t days)
    • Serialized Page 1 (t+(c*1)days)
      • Post i1
    • Serialized Page 2 (t+(c*2)days)
      • Post i2
    • etc.
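The selection rule above can be sketched directly. The course content here is hypothetical:

```python
from datetime import date

def posts_for_feed(cohort_start, today, posts):
    # The interval i between today and the cohort's start date selects
    # the post (or posts) whose offset equals i.
    i = (today - cohort_start).days
    return [content for offset, content in posts if offset == i]

# Hypothetical course content as (offset, content) pairs.
course = [(0, "Introduction"), (3, "First assignment"), (6, "Reading list")]

# A cohort that started March 10, queried on March 16 (i = 6):
print(posts_for_feed(date(2009, 3, 10), date(2009, 3, 16), course))
# ['Reading list']
```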

Serialized Feeds: Processing

Processing to produce the serialized feed occurs in three stages:
  • First, the author creates or edits a master page. This creates database records for the master page and for each of the posts associated with the master page.
  • Second, the script creates a series of pages for a given cohort. This occurs when a potential subscriber invokes the subscribe script. Essentially, the script creates the RSS feed content for each day the course runs. These are stored in the database and identified with a cohort number and a publish date.
  • Third, a nightly cron job prints the daily page for each cohort for each course. The idea here is that the script creates a static page that may be accessed any number of times without creating a CGI process. Static pages are stored in a standardized location: base directory/course ID number/cohort offset number
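The static-file scheme might look like the following sketch. The base directory and function names are assumptions; only the path convention (base directory / course ID number / cohort offset number) comes from the text:

```python
import os

def feed_path(base, course_id, cohort_offset):
    # Standardized location: base directory/course ID number/cohort offset number.
    return os.path.join(base, str(course_id), str(cohort_offset))

def publish_feed(base, course_id, cohort_offset, rss_xml):
    # What the nightly cron job would do for each cohort of each course:
    # write the day's feed to a static file, so aggregator requests never
    # spawn a CGI process.
    path = feed_path(base, course_id, cohort_offset)
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "w") as f:
        f.write(rss_xml)
    return path

print(feed_path("/var/www/feeds", 127, 17))  # /var/www/feeds/127/17
```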
Subscribing to a serialized feed thus becomes nothing more than a matter of pointing a browser to the appropriate page. For example, pointing the browser to the appropriate address allows the learner to subscribe to course number 127 (the Fallacies course), cohort number 17 (the cohort that started 17 days after the course was first created). This link is created and displayed by the subscribe script.

This process has several advantages. First, it fixes the content of the course to what is defined at the time the student signs up; the course may be edited for subsequent users without changing what was originally delivered to previous users. Second, processing time is minimized and front-loaded, allowing the system to scale massively. Third, and most significantly, multiple users are served by the same RSS file. Not only does this save significantly on processing, it also sets up an environment in which interaction may be facilitated.