darcusblog » General - geek tools and the scholar

General

Moodle vs. Sakai: Future Directions

Posted in General, Technology on May 6th, 2010 by darcusb – 7 Comments

So in trying to come to a conclusion on Moodle vs. Sakai, it’s easy to get wrapped up in the minutiae of feature comparisons and such. It seems to me, however, that it’s important to keep the larger, longer-term picture in view. In this case, that in part involves the strategic directions of these two projects, which give us a sense of where they might be in five years. To wit, below is my understanding of Moodle 2 and Sakai 3. I’m in a bit of a hurry with end-of-semester chaos, so please correct me if I have anything wrong here, or if I’m missing important details.

Moodle 2

As I read it, Moodle 2 is a significant change to the platform, but a largely incremental one. The primary change appears to be the addition of a repository API, which provides a flexible way to add access to different kinds of resource repositories (a conceptual sketch follows at the end of this section). For example, there is a plug-in that uses this API to let users browse and insert images from Flickr from within the standard Moodle editing tools. In addition, there is work on new features, many of which are outlined in the following video:

In other news, there appears to be independent work on making Moodle friendly for mobile devices. Here’s a video of one such example:

From what I can tell, Moodle 2.0 will be ready for deployment sometime later in 2011.
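
To make the repository API idea a bit more concrete, here is the conceptual sketch I mentioned above. To be clear, this is not Moodle’s actual API (which is written in PHP, with its own class and method names); it’s just a Python illustration of the pluggable-backend shape: each backend (Flickr, a local file area, and so on) implements the same small interface, and the editor’s file picker talks only to that interface.

    # Conceptual sketch only: not Moodle's real repository API, which is PHP and
    # differs in naming and detail. The point is the pluggable interface.
    from abc import ABC, abstractmethod

    class Repository(ABC):
        @abstractmethod
        def listing(self, path="/"):
            """Return files and folders at `path` as dicts the file picker can render."""

        @abstractmethod
        def get_file(self, file_id):
            """Fetch the selected file so it can be embedded in course content."""

    class FlickrRepository(Repository):
        """Hypothetical backend; a real one would call the Flickr web API."""

        def listing(self, path="/"):
            return [{"id": "12345", "title": "campus.jpg",
                     "thumbnail": "https://example.org/thumb/12345.jpg"}]

        def get_file(self, file_id):
            return b"...image bytes..."

    # The editor's file picker never knows which backend it is talking to:
    for item in FlickrRepository().listing():
        print(item["title"])

The practical payoff is that adding another backend (say, an institutional repository) should mean writing one more small plug-in rather than touching the editor itself.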

Sakai 3

Sakai 3, on the other hand, is a more radical change: effectively a complete rewrite of the platform. This rewrite involves building the Sakai-related functionality on top of other, more generic, open source code. The new core code, and hence what the Sakai community is responsible for maintaining, is dramatically smaller than the old; at present a reduction of close to 90% of the code base! In addition, one of the core developers of the new Sakai core has also become a developer on the Apache Sling project on which Sakai 3 is based. This reflects some smart strategic decisions, and should provide a focused foundation that is easy to develop and maintain.
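
To make the “generic open source code” point concrete: Sling exposes content as resources addressable by URL, and you can get a JSON rendering of a node by asking for it with a .json extension. Here’s a minimal sketch of what reading a piece of Sakai 3 content might look like from a script; the host, path, and content structure are hypothetical, not taken from an actual Sakai 3 server.

    # Minimal sketch: reading a content node from a Sling-based server as JSON.
    # The URL below is hypothetical; a real Sakai 3 instance would have its own
    # paths and would normally require authentication.
    import json
    import urllib.request

    url = "http://localhost:8080/content/my-course/syllabus.json"
    with urllib.request.urlopen(url) as resp:
        node = json.load(resp)

    # Properties of the node (title, body, modification date, etc.) come back as
    # plain JSON, so pages, forum posts, and uploads can all be read the same way.
    for key, value in node.items():
        print(key, "=", value)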

Below are a few examples of what this might look like, gleaned from the design wireframes (visual mockups, not necessarily running code at this point) and from the running demo code.

Example: Everything is Content

Michael Feldstein does a good job explaining what this all means. But perhaps some pictures will make the implications more immediately apparent. Consider search. Because existing LMSs are organized around both courses and tools, it’s quite awkward to search for content (forum posts, blog posts, assignment or page content, etc.) globally. On the other hand, consider this proposed search UI for Sakai 3:

Sakai 3 search panel

So one does not go into, say, a forum and search the forum. Rather, one has a search interface that is the same whether you search the entire university’s content or an individual course. That integrated search interface looks beautiful, and will be instantly familiar to anyone accustomed to contemporary web interfaces.

A few more screenshots follow below, these all from the actual current Sakai 3 demo server.

Example: Widgets for Integrating Different Content

The new interface is based on widgets, which let you quickly add different blocks of functionality and move them around as well. Because of the new core foundation, these widgets are also designed to be really easy to develop, so it’s much easier to add new functionality. In this view, for example, you see a “widget” I’ve added to access my Google Docs documents from within Sakai.

google docs widget in the Sakai 3 dashboard

Example: Editing

One design priority for Sakai 3 is to make editing content much easier. Here we see the clean new editing interface.

Sakai 3 editing toolbar

In addition, all content is versioned, so that you can easily step back through changes and see who made what changes. Since all content is treated uniformly in Sakai 3, there are no artificial limitations on how this versioning support can be applied. Here’s what the UI currently looks like:

versioning interface

What I get out of all of this is that Sakai 3 will be more scalable (faster), more flexible, more elegant, and easier to use: a brand new LMS designed for the needs of the 21st century. The devil will still be in the details of exactly how they implement specific features (gradebook, assignments, etc.) on top of this new core, but I am also really encouraged by what I am seeing of the design process. It demonstrates the attention to detail that is necessary to do this right.

The current roadmap is that it should be ready for large-scale deployment sometime in mid-to-late 2011 [I corrected the year from 2012, per comment below]. Also, there’s some work going on (at Indiana?) on allowing mixed 2-3 deployments; using v2 tools within a v3 context, for example.

Law and the Thomson Reuters-Zotero Suit

Posted in General on August 3rd, 2009 by darcusb – Comments Off

Sean Takats blogged a while back about the dismissal of Thomson Reuters’ suit over Zotero. I had a chance to read the transcript of the hearing. As Sean wrote, the judge dismissed the Thomson Reuters complaint due to a lack of jurisdiction. What exactly does this mean? From my non-expert read, the dismissal was on a technicality: Thomson Reuters asserted damages ($10 million/year worth) it could not demonstrate. There was never any discussion of the substance of the suit; rather, virtually the entire hearing focused on the question of how Thomson Reuters came up with the $10 million figure. Answer: a very precise 80% of a vague estimate of the number of downloads from the Zotero site, multiplied by $200 (the average price of EndNote software). The judge recognized this as ridiculous, and so threw out the case.

Here’s hoping Thomson Reuters has learned a lesson here and backs off refiling.

A Home NAS and Backup Solution

Posted in General, Technology on May 18th, 2009 by darcusb – Comments Off

So I’ve been thinking for a while now that I need to get more serious about a storage and backup solution for my personal and household data. After casually looking around at alternatives, I finally decided on a solution. I effectively took this information about hardware, combined it with this and this information about using OpenSolaris and ZFS for software, and now have 1 TB of mirrored networked storage (and automated snapshots when I get to it), all for less than $500.
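
For the curious, the ZFS side of this boils down to a couple of commands. The sketch below isn’t my exact setup: the device names, pool name, and drive sizes are stand-ins (mirroring two 1 TB drives yields roughly 1 TB usable), and the stock zpool/zfs tools are driven from Python purely for illustration.

    # Rough sketch of the ZFS pieces: a mirrored pool, a dataset, and a snapshot.
    # Device names (c0t1d0, c0t2d0) and names like "tank" are placeholders.
    import subprocess

    def run(cmd):
        print("$", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # Mirror two whole disks into a pool; usable capacity is one disk's worth.
    run(["zpool", "create", "tank", "mirror", "c0t1d0", "c0t2d0"])

    # A dataset for the household data, to be shared over the network (NFS/CIFS).
    run(["zfs", "create", "tank/home"])

    # A named snapshot; a cron job or SMF service can take these on a schedule.
    run(["zfs", "snapshot", "tank/home@nightly-2009-05-18"])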

It was far more of a PITA getting OpenSolaris running as I wanted than I’d hoped, but I think the end product is both better and cheaper than the commercial alternatives.

RDFa, Microformats and HTML 5 QOTD

Posted in General on May 5th, 2009 by darcusb – Comments Off

Shelley Powers, on a rather typical IRC conversation on RDFa in HTML 5:

Unfortunately, too many people who really don’t know data are making too many decisions about how data will be represented in the web of the future.
As usual, Shelley nails it.

Boycotting ResearcherID?

Posted in General on April 1st, 2009 by darcusb – 7 Comments

So I just got this note from Thomson Reuters in my inbox, regarding their new ResearcherID service:

When you register with ResearcherID you are assigned a unique author identifier that expressly associates you with your work, helping to eliminate the common problem of author misidentification.
I’m presented, then, with an ethical dilemma: do I participate because it’s probably in my personal interest to do so, or do I boycott this in favor of larger principles because of Thomson Reuters’ otherwise reprehensible activities (the Zotero suit)?

My tentative answer: boycott. I already have something that identifies me: http://bruce.darcus.name/about#me.

citeproc-js

Posted in General on March 13th, 2009 by darcusb – Comments Off

Just in case someone out there is interested in helping out, Frank Bennett has been working on a complete rewrite of Zotero’s CSL engine. The new code is designed along functional lines, and is intended to be easier to extend, debug, and integrate into different kinds of contexts, as well as faster. The code is currently in the xbib repository.

Zotero 1.5

Posted in General on February 24th, 2009 by darcusb – 3 Comments

Zotero has recently moved its 1.5 release into beta status. Key new features:

  • XHTML “rich text” notes
  • server-based syncing of libraries (no more rsync to keep my home and office machines in sync)

Aside: I’m a little worried about centralized services, particularly after the Magnolia debacle. I’d prefer something more distributed, à la laconi.ca.

The Zotero team has also updated its website to reflect the beginnings of the social-networking functionality they will be building out in the future. To wit:

The website functionality also allows you to generate a CV from your library collections. I’ll be adding one once they add support for “smart collections.”

Documenting a CSL Processor

Posted in General on February 15th, 2009 by darcusb – Comments Off

For anyone interested in understanding how citation and bibliographic formatting code might work conceptually, the documentation for Andrea Rossato’s Haskell implementation of CSL is a great place to start.

I find Haskell a bit tough going at the level of details, but the type definitions are really clear. For example, the Text.CSL.Proc documentation includes a citeproc function, whose type signature is:

    citeproc :: Style -> [Reference] -> [[(String, String)]] -> BiblioData

The documentation then usefully tells us what this means:

With a Style, a list of References and the list of citation groups (the list of citations with their locator), produce the FormattedOutput for each citation group and the bibliography.

Dan’s Questions

Posted in General on February 8th, 2008 by darcusb – Comments Off

So Dan Chudnov’s been digging into RDF and the semantic web, and posts a few of the questions he’s collected. I’ll answer at least some of them below, though I (still) don’t really consider myself an expert.

  • I have never understood FOAF. It seems like a fine way to serialize a cult-of-personality network (e.g. “see? i’m only two steps from timbl himself!!”) Similarly I don’t get the whole “social graph” buzz either. I’m not a marketer looking to harvest customer data. I’m not doing any affinity indexing just now. What other use is there for saying who my friends are, besides those two?
    I’m actually not that interested in the foaf:knows property. That seems less immediately useful than being able to describe, say, the kind of data in my CV: who I am, my bio info, publications, and maybe also more specific kinds of relations with other people (say collaborators?). I see FOAF as a simpler RDF version of the sort of thing MADS does. Reminds me: I need to update mine!
  • Does the linked data movement really depend upon RDF? It doesn’t seem like it has to. Maybe it could grow faster if it didn’t.
    Let’s turn the question around and ask: if not RDF, then what? You definitely need some model on which to base it, it seems to me, and things like GRDDL, microformats, etc. leave a lot of flexibility on the encoding end. The key for linked data is really the URI, of course, which becomes kind of like a key for a global database.
  • If blank nodes are bad (end of the section), how do I represent sets of literals that mean the same thing but are expressed in different languages? I need to do that right now and I can’t figure out how without blank nodes.
    What’s wrong with multiple literals, each with a language tag? (There’s a short sketch just after this list.)
  • I’m still mainly interested in Description (talking about things) and am completely disinterested in modeling knowledge (what things are and mean) and seem to keep finding examples where arguments about best practices hinge on notions of essential truths …
    You totally lost me on this one Dan!
  • er, that last one was over-long, so I’ll try it this way instead. I think I’m interested in Linked Description, not Linked Data.
    Aren’t we splitting hairs here?
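
On the language-tagged literals point above, here’s a small sketch of what I mean, using Python’s rdflib. The subject URI and the choice of skos:prefLabel as the property are just for illustration.

    # Two literals for the same subject and property, distinguished only by
    # language tag: no blank nodes required.
    from rdflib import Graph, Literal, URIRef
    from rdflib.namespace import SKOS

    g = Graph()
    g.bind("skos", SKOS)
    concept = URIRef("http://example.org/concepts/color")
    g.add((concept, SKOS.prefLabel, Literal("colour", lang="en-GB")))
    g.add((concept, SKOS.prefLabel, Literal("couleur", lang="fr")))

    print(g.serialize(format="turtle"))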

Bitlog

Posted in General on February 4th, 2008 by darcusb – Comments Off

John Resig on a new, free (as in speech), and very nice twitter clone he’s been working on, complete with SVN repo and Trac. As one would expect, it makes very nice use of Ajax.

Hmm … I wonder if/how something like this might be adapted for academics?