darcusblog » 2009 » March - geek tools and the scholar

Archive for March, 2009

Open Linked Library

Posted in Technology on March 28th, 2009 by darcusb – Comments Off

So Jonathan Rochkind picked up the same link that I saw: a fantastically cool, potentially very useful set of RDF triples describing a certain widely used library classification scheme, which I will not name here because its copyright holder (OCLC) has a rather tainted history of aggressively guarding its turf.

What’s cool about this effort is not just the data (and as Ross notes in a comment, it really is very cool), but how it was constructed. As I note in a comment:

If you look at the code, it constructs the RDF from publicly available data: Wikipedia.

Jonathan and Ross also note the irony here that all of this interesting and useful work is being done by people from outside the world of formal library committees and such. Ross:

Ed is singlehandedly dragging libraries into the linked data cloud. And it’s all pretty much under the radar and totally outside of any library knowledge. It will be interesting to see how the library world deals with the fact that, despite their person-decades of committees to deal with modernizing library data, the entire outside world will already have been working with it for almost a year.

The real question is, what next? As I also note:

While I don’t doubt the OCLC may well try to get it taken down given their past behavior (come on people; get with the 21st century), would they really have a legal leg to stand on?

If they do, then they probably ought to go after Wikipedia too. I dare them to do that.

So my public challenge to the OCLC:

Don’t be small-minded here. Recognize efforts like these for what they are: useful and productive enhancements to library data that can be of broad benefit to OCLC members.

If, on the other hand, you cannot yet wrap your arms around the inevitability of this open web of knowledge, then I dare you: send a cease and desist to Wikipedia.

As for me, I’m really interested in starting to link together all this stuff: this new Decimalised Database of Concepts, Ed Summers’s similar effort with the Library of Congress subject headings (which I understand will be coming back soon at loc.gov), the Open Library’s serving up of RDF for all their data, and maybe convincing Zotero to tie in their data as well.
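A sketch of what that kind of linking might look like, expressed as plain triples in Python (every URI below is invented; the real identifiers would live at the services mentioned above):

```python
# owl:sameAs is the standard OWL predicate for asserting that two URIs
# name the same thing across datasets.
OWL_SAMEAS = "http://www.w3.org/2002/07/owl#sameAs"

# Hypothetical link between a classification concept and a subject
# heading; both URIs are made-up stand-ins for the real ones.
links = [
    ("http://example.org/ddc/550", OWL_SAMEAS,
     "http://example.org/lcsh/earth-sciences"),
]

# Emit the links in an N-Triples-like form.
for s, p, o in links:
    print(f"<{s}> <{p}> <{o}> .")
```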

You can see my own small effort at my new site I’ve been slowly working on. Why, after all, should I, as an academic author, leave it up to libraries and publishers alone to hold data about me and my work?

RDFa for Scholarship

Posted in Research, Technology on March 26th, 2009 by darcusb – Comments Off

So Jeni Tennison (who very graciously helped me out a while back on the XSLT list when I was trying to wrap my mind around XSLT 2) describes a very intriguing demo of integrating RDFa into web pages that could point to some interesting possibilities for scholarly publishing. When you load the page, you see this:

[Screenshot: RDFa example]

So what’s going on here? A jQuery-based plug-in is extracting RDF triples from the page and displaying that information in the panels on the left. That’s cool enough, but consider what happens if you add a note at the bottom of the page that “Erasmus Darwin was Robert Darwin’s father.” You get this confirmation:

[Screenshot: RDFa example]

So there’s some natural language parsing going on here that converts that sentence into additional triples. These triples then get added to the human-facing display.

[Screenshot: RDFa example]
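The extraction step the plug-in performs can be sketched with Python’s standard library. This toy parser handles only a tiny subset of RDFa (an `about` plus `property` pair with literal text content), and the markup it consumes is hypothetical:

```python
from html.parser import HTMLParser

class RDFaExtractor(HTMLParser):
    """Very simplified RDFa triple extractor: handles only @about +
    @property with literal text content, a tiny subset of the real
    RDFa processing rules."""
    def __init__(self):
        super().__init__()
        self.triples = []
        self._stack = []  # (subject, predicate) awaiting text, or None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if "about" in a and "property" in a:
            self._stack.append((a["about"], a["property"]))
        else:
            self._stack.append(None)

    def handle_data(self, data):
        if self._stack and self._stack[-1] and data.strip():
            subj, pred = self._stack[-1]
            self.triples.append((subj, pred, data.strip()))
            self._stack[-1] = None  # one literal per element

    def handle_endtag(self, tag):
        if self._stack:
            self._stack.pop()

# Hypothetical markup of the kind the demo page carries.
html = '<p about="#erasmus" property="foaf:name">Erasmus Darwin</p>'
ex = RDFaExtractor()
ex.feed(html)
print(ex.triples)  # [('#erasmus', 'foaf:name', 'Erasmus Darwin')]
```

A real RDFa processor also resolves prefixes, tracks nested subjects, and handles `resource`, `typeof`, and friends; the point here is just that the triples live in the markup itself.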

Hmm… I might have to experiment with this when I get some time.

In other RDFa-related news, how cool is it that the new recovery.gov site makes use of RDFa (via John Breslin), or that SlideShare does as well (see Ed’s post)?

Are APIs Enough?

Posted in Technology on March 25th, 2009 by darcusb – 4 Comments

Martin Fenner discusses issues related to a recent post of mine, and concludes, first:

Both Microsoft Word and OpenOffice should open up their citation APIs to third-party tools. This would create better citation tools, allow easier exchange of documents between authors (journal submissions), and make it easier for smaller tools such as Papers to integrate with word processors (a workaround is described here).

I guess it depends on exactly how the APIs work. My view is that they probably need to allow users to easily connect different kinds of data stores, which then send back the relevant metadata, which gets embedded in the document in a standard way. Citation processing, then, probably needs to happen closer to the document than the bibliographic plug-in (or whatever).

The citation processing also likely needs to be pluggable. Microsoft’s XSLT-based solution, for example, is not very good. OTOH, if they were, say, to write a fast C# or F#-based CSL processor that fully supported the spec, that problem would go away.
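To make the “pluggable” idea concrete, here is a minimal sketch (all class and method names are hypothetical; a real plug-in would delegate to a full CSL processor rather than this toy formatter):

```python
from abc import ABC, abstractmethod

class CitationProcessor(ABC):
    """Interface a word processor could expose: it hands over the
    metadata embedded in the document and gets back formatted text."""
    @abstractmethod
    def format_citation(self, item: dict) -> str:
        ...

class SimpleAuthorDateProcessor(CitationProcessor):
    """Toy stand-in for a real CSL processor."""
    def format_citation(self, item: dict) -> str:
        return f"({item['author']} {item['year']})"

# Metadata embedded in the document in a standard way, keyed by ID.
doc_metadata = {"id1": {"author": "Darwin", "year": 1859}}

proc = SimpleAuthorDateProcessor()
print(proc.format_citation(doc_metadata["id1"]))  # (Darwin 1859)
```

Because the document carries the raw metadata and the processor is swappable, a different style (or a better engine) changes the output without touching the document.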

Second, he warns:

If they wait too long, we will probably see online word processors such as Google Docs and Zoho Writer start adding an API for citations and bibliographies, and all of a sudden become very serious alternatives for writing scientific papers.

I’m not holding my breath, given all the other limitations and problems I see in Google Docs. And if they add it, they still need to consider the document interoperability concerns I note above and in the earlier post. But coupling APIs with standardized and embedded document metadata can help solve all of these problems.

What Do I Want in a Next-Gen LMS?

Posted in Teaching, Technology on March 20th, 2009 by darcusb – 5 Comments

With the start of the Pinax-LMS effort, a real-world use case …

I teach a course that feels constrained by the traditional LMS. It is a course called Global Change (the current, not-so-great syllabus is here), and it explores the breadth of geography through analysis of contemporary issues: climate change, urbanization, water, etc. For that reason, it is team-taught.

So without going through the whole history and background behind where I want to go (partly covered here), let’s just imagine this:

On the first day of class, we collect information on student interests and experience. We use that information to divide the students into various topical groups. Those groups will be responsible for maintaining bookmark links and notes related to online readings they gather, and will work on various projects that consolidate their knowledge of their area of focus. Those groups will also be responsible for coordinating outside-class work they will present in class.

In addition, individual students will be responsible for blogging reactions to course discussions and readings, probably on a weekly basis. Students are also encouraged to comment on each other’s work.

So, from my perspective, I need to be able to easily access all that’s going on in the different groups. I’d also like to be able to easily bring in bookmarks and blog entries I create that may exist outside of the course proper. And I need to be able to maintain grades for the students.

From a student perspective, the sort of work and collaboration here needs to be as seamless as possible. Ideally, when they log in, they can see what’s going on with a network that certainly includes the class, but may not be limited to it. This might look something like Elgg’s dashboard.

[Screenshot: Elgg dashboard]

At the same time, students need to be able to understand what is course-specific; so, integration, but also distinction. They should, for example, be able to easily see their grades for the course.
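The requirements above could be roughed out as a small data model; this is plain Python with invented names, not any particular LMS framework:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the core entities the post asks for: topical
# groups that collect bookmarks, plus per-student grades, all scoped
# to a course (so "integration, but also distinction").

@dataclass
class Bookmark:
    url: str
    note: str = ""

@dataclass
class TopicalGroup:
    name: str
    members: list = field(default_factory=list)    # student usernames
    bookmarks: list = field(default_factory=list)  # Bookmark objects

@dataclass
class Course:
    title: str
    groups: list = field(default_factory=list)     # TopicalGroup objects
    grades: dict = field(default_factory=dict)     # username -> grade

course = Course("Global Change")
water = TopicalGroup("Water", members=["alice", "bob"])
water.bookmarks.append(Bookmark("http://example.org/reading", "Week 3"))
course.groups.append(water)
course.grades["alice"] = "A-"
print(len(course.groups), course.grades["alice"])  # 1 A-
```

The instructor view would aggregate across `groups`; the student view would start from the network and filter down to the course.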

Pinax-LMS

Posted in Teaching, Technology on March 19th, 2009 by darcusb – Comments Off

James Tauber has just created a new project for learning management at Google Code, intended to be a variant of the Pinax suite. Of the project, sensibly named Pinax-LMS, James says:

Just to be clear, I’m talking about building a full-blown Moodle-like system on top of Pinax (although where possible making apps that could be used outside of just the LMS).

I’m intrigued by this, and plan to help where I can, and as (limited) time permits. Why?
  1. Pinax is a nice, clean, foundation for social web applications.
  2. It’s in turn built on top of Django, which is a nice, clean, foundation for all kinds of web applications.

I guess the question I’m interested in here is: what possibilities would this foundation offer for tighter, more elegant integration between the “learner-centric” social-networking functionality that is going to play an important role in education going forward, and the more traditional course-centered focus of the LMS?

citeproc-js

Posted in General on March 13th, 2009 by darcusb – Comments Off

Just in case someone out there is interested in helping out, Frank Bennett has been working on a complete rewrite of Zotero’s CSL engine. The new code is designed along functional lines, and intended to be easier to extend and debug and to integrate into different kinds of contexts, as well as be faster. Code is in the xbib repository currently.

Citation Management Choices

Posted in Technology on March 13th, 2009 by darcusb – Comments Off

I’m noticing people in the blogosphere struggling with choices among what seems like a plethora of options in the reference-manager space, with Mendeley the latest entry.

Without choosing any particular winners, here are some characteristics that I personally prioritize:

  1. Can I store the data I need to store? For me, this goes far beyond journal articles and books, and can include a wide range of primary source material.
  2. Can I easily import and export that data, in a form where I can easily reuse it elsewhere? This goes beyond the raw metadata: it can and should also include the documents I write.
  3. Am I confident that the software will be around for the long-run? If it’s proprietary software, this by definition means I have to be supremely confident in the company and its business model, because the risk is the software simply goes away. If it’s free software, I’m much less concerned, but certainly the quality of the code-base and the health of the community that develops and maintains it are important prerequisites. For this reason, I have a strong bias in favor of open source software.
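Point 2 can be made concrete with a round trip: whatever the storage format, export should be lossless and reusable. A minimal sketch (the record and its fields are hypothetical; real reference managers each have their own schema):

```python
import json

# A primary-source record of the kind point 1 asks for, not just a
# journal article; all values here are invented for illustration.
record = {
    "type": "letter",
    "author": "Jane Addams",
    "date": "1912-05-01",
    "archive": "Example University Archives",
}

exported = json.dumps(record, indent=2)  # export to a reusable format
assert json.loads(exported) == record    # re-import is lossless
```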

The answers to these questions will vary depending on where you are in the academic universe. People in the sciences, for example, typically don’t cite anything but secondary literature: journal articles and such. But they may have more demanding needs in terms of markup for things like math. Humanities people, by contrast, will often deal with a much wider range of sources, sometimes in multiple languages. Applications like Mendeley and Zotero are really built by and for different communities of users.

That said, it’d be nice if applications could work on the problem I outlined earlier, as well as imagine server-based solutions that are not centralized (see Laconica), so that users have more flexibility to use different tools.

My University and the Web: Priorities

Posted in Research, Teaching, Technology on March 9th, 2009 by darcusb – 1 Comment

My last post outlined some frustrations I’ve been having with my university’s IT infrastructure and decision-making. But an obvious next question might be: what do I see as an alternative, and what do we need to get there? In no particular order, here’s what I’m thinking:

  1. Open standards support can no longer be an optional “nice to have” checklist item among a long list of other items. It has to be a central requirement. Right now, relevant web-related standards include CSS, HTML, XMPP, CalDAV, IMAP, Atom, and so forth. Support for these standards means it’s easier to integrate different applications, and to evolve them to meet new needs. This evolution-friendliness includes making it easier to move to other solutions.
  2. In particular for important institution-wide web applications, open source needs to be the norm, and proprietary software, with all its monetary and innovation costs, the exception.
  3. If as an institution we believe in new models of learning that integrate teaching and research, and which put students and inquiry-based learning at the center of what we do, then our technology decisions should reflect that. To wit, while I have no definite technology ideas in mind, I do in general think:
    1. We really need to get away from the straitjackets presented by dated, course-centered solutions like Blackboard. These present severe limitations on what we can do in the classroom (and beyond).
    2. As an alternative, I am really intrigued by student-centered social networking software like Elgg. I’m also encouraged that open source LMSs like Sakai are working on integrating similar kinds of functionality (see, for example, the whitepaper for Sakai 3 [pdf]).
    3. This learner-centered social-networking model probably ought not be limited to undergraduate education, but rather leave room for a more comprehensive online community that really reflects the connections across fields of learning and scholarship, as well as breaks down the barriers between undergraduate and graduate teaching and faculty research.
    4. A university-wide website redesign and CMS has to be built for all of this from the beginning, not added as an afterthought. In short, our web presence needs to be built on a foundation that is as dynamic and flexible as learning and research in the 21st century. Old-school CMSes are not.

My University and the Web: Present

Posted in Research, Teaching, Technology on March 8th, 2009 by darcusb – 6 Comments

Last week, I attended an IT strategy council meeting, whose topics of discussion included an update on the forthcoming university website redesign, and another on the place of open source software at the university. I was at this meeting primarily for the latter discussion, having recently asked a simple question and not heard an entirely satisfactory answer:

What is the role of open source software and open standards at this institution?

I was prompted to ask this question as a result of a confluence of three quite concrete frustrations that boiled over last term.

First, I (belatedly) heard of Miami’s move to a fully proprietary calendaring and email stack (MS Exchange). Despite all the obvious marketing hype surrounding buzzwords like “unified communications” and such, I knew this move really meant one thing: the institution was hitching its communications fate to a solution that would only work as promised for those users who—whether by choice or some compulsion—used Microsoft products. So without going into all the details, let’s just say I voted with my virtual feet: I effectively boycotted the new system, and now forward all my mail to GMail, which has a (much) better web client and better IMAP support. Likewise, with its support of CalDAV, Google’s calendar application has much better standards compliance than Microsoft’s. In short, Google’s standards-based solution works better for me than Microsoft’s closed one.

Second, my students and I became increasingly and painfully aware of just how bad our Learning Management System (LMS)—Blackboard—is. One immediate issue was online quizzes, which I had been using as a weekly assessment for students. But because of how poorly Blackboard is designed, transferring quizzes and tests across semesters is both a) really awkward, and b) buggy. For this reason, I had to delay rolling out these quizzes in my large-enrollment class for roughly a month. And this was despite the fact that Blackboard had known of the bug for months.

So this semester, I’ve ditched the quizzes, and I’ve continued off-loading as much of my web content as possible outside of Blackboard. My syllabus is a separate XHTML file that in turn links to other XHTML files for weekly assignments. My slides are all available online as well (though currently authenticated), again as XHTML files. In short, virtually everything is online, but the only things I use Blackboard for are the gradebook and course announcements.

Another issue presented itself in another lower-level class. This course, Global Change, is team-taught and focuses on learning about the breadth of geography through analysis of contemporary issues: climate change, urbanization, water, etc. My colleague and I had been frustrated with the consistency and quality of participation we were getting from students who were coming from increasingly diverse disciplinary backgrounds. So, we decided, let’s shift toward a group-learning approach where the students drive much more of the content and can collaborate on learning. The students, for example, gathered the majority of the topical readings for each module. But how to facilitate this in Blackboard? Our answer was to use the very awkward Blackboard blogging module. While we all saw great promise in the approach, we also felt held back by the limitations of the LMS.

So Blackboard sucks and I’m asking myself, why are we continuing to invest in this software?

The third thing that contributed to my pushing on this issue is my taking over as director of graduate studies for my department. One of my jobs is to recruit good students. The most important way students find out about our program is through our departmental website. But our departmental web presence sucks because it’s too difficult for people to update content. What content should we be keeping up to date? Well, everything we do: the classes we teach, the works we publish and present, the research projects we’re working on, the students we work with. Ideally, we could easily keep this content updated, and it could in turn filter to the wider university. For example, if we publish information about a talk in our department, other people elsewhere in the university who might be interested in it should be automatically notified.
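That notification scenario is exactly what a standards-based feed buys you: the department publishes an Atom entry, and anyone interested subscribes. A sketch using only Python’s standard library, with an invented talk announcement:

```python
import xml.etree.ElementTree as ET

# Hypothetical Atom entry for a department talk announcement; a campus
# aggregator could poll feeds like this and notify subscribers.
ATOM = "http://www.w3.org/2005/Atom"
ET.register_namespace("", ATOM)  # serialize Atom as the default namespace

feed = ET.Element(f"{{{ATOM}}}feed")
entry = ET.SubElement(feed, f"{{{ATOM}}}entry")
ET.SubElement(entry, f"{{{ATOM}}}title").text = "Colloquium: Urban Water Futures"
ET.SubElement(entry, f"{{{ATOM}}}updated").text = "2009-03-08T12:00:00Z"

xml = ET.tostring(feed, encoding="unicode")

# A consumer pulls entry titles back out of the feed.
titles = [t.text for t in feed.iter(f"{{{ATOM}}}title")]
print(titles)  # ['Colloquium: Urban Water Futures']
```

A real feed needs the required `id`, `author`, and feed-level metadata; the point is that both ends only need to agree on the (open) Atom spec, not on each other’s software.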

There’s really no simple way to make this happen at my university. But thankfully the IT people here are reasonable, and so agreed to set up Drupal for us. I had them install some additional plugins, such as one for bibliographic management. I ultimately want something like this site, which isn’t really that hard to do with Drupal (notwithstanding that I’m really not a fan of PHP).

On the other hand, we have no real infrastructure on campus to make this easy. For example, there are no (good) university themes for Drupal. Instead, every new installation has to either hire a contractor to create one or do the work in-house (as I am). This is really not good. It’s just too hard to create a decent web presence for an individual department or program. Yes, the IT people are very helpful, but they’re also overextended. Someone needs to give them the resources they need to make it easier on all of us, and so promote the university’s mission better, and save money while doing it.

But this issue goes back to the website redesign. Long story short: it seems a fait accompli that the university will adopt a proprietary CMS. While I can imagine such a platform may well work for university-level marketing and such, I have a really hard time envisioning how it would enable what we need at the department level. I also find it really hard to imagine how such a CMS would integrate in anything but very awkward ways with the learning that happens in classrooms and laboratories around campus.

So this is where things stand now, and I’m not terribly optimistic. My university has already made an expensive and multi-year investment in a proprietary email and calendaring system, and is about to make a similar investment in a proprietary CMS. These kinds of decisions will limit our flexibility going forward.

But the conversations will be ongoing, and there’s enough interest among forward-thinking people here to imagine that there may be room for exploration. I’ve got some ideas on this that I may explore in a future post.

Acid Test

Posted in Technology on March 2nd, 2009 by darcusb – Comments Off

Ian Davis, with his semantic web application acid test:

[T]he Semantic Web has some well-defined principles that can be used as tests. Here’s the first test: if you see one of these applications find one of its pages describing something that’s useful to you (e.g. a place or a person) and ask yourself what’s the URI of the thing this page is describing?
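One common convention (among several) for answering that question is the hash URI: the part before the fragment identifies the page, and the fragment names the thing it describes. A tiny illustration with a made-up URI:

```python
from urllib.parse import urldefrag

# Hypothetical URI: the fragment "#person" names the person;
# stripping it yields the URI of the page describing them.
thing_uri = "http://example.org/people/erasmus-darwin#person"
doc_uri, frag = urldefrag(thing_uri)
print(doc_uri)  # http://example.org/people/erasmus-darwin
print(frag)     # person
```

If a page can’t give you a distinct URI for the thing itself, there is nothing for anyone else’s data to link to, which is the point of the test.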

Oh, and his and Tom Heath’s quick intro to the semantic web is also very good and accessible.