The Hypermedia Research and the World Wide Web Workshop was held at Hypertext '96 in March of 1996. The overall purpose of the workshop
was to set the stage for better communication between the hypermedia
research community and the Web development community. Below is information
about the workshop and some of its results.

Update, September 3, 2005: I pulled a few key files out of my archives and added them to my new site.
Participant position papers
- Applying
Hypermedia Research to the World Wide Web
- By Keith Andrews, Graz University of Technology, Austria. Drawing
upon experience with Hyper-G, Keith points
out the need for external link databases to help alleviate some of the
Web's current problems.
- An
Evaluation of the World Wide Web as a Platform for Electronic
Commerce
- By Dan Connolly, World Wide Web Consortium.
Dan evaluates the architecture and implementation
of the Web with respect to Douglas Engelbart's
requirements for an open hyperdocument system, which
are derived from experience in using
CSCW to support large-scale electronic commerce.
- A
Web of Objects
- By Paul De Bra, Eindhoven University of Technology, The
Netherlands. Paul describes how, by using object-oriented database
technology for the storage level of Web servers, it is possible to
incorporate many features envisioned by some well-known hypertext reference
models. Unlike Hyper-G, this would preserve the
architecture of the Web.
- Extended
Linking Facilities for the WWW
- By Gary Hill, University of Southampton, UK. Gary describes how
adding some of the features of Microcosm to the Web would improve authoring
facilities and navigation. The Distributed Link
Service is presented as an example.
- Research
on Usability-Based Facilities for WWW Browsers
- By Stewart N. T. Shen, Old Dominion University, USA. With a focus on
developing browser features based on user needs (as opposed to advertiser
needs), Stewart describes ways to assist beginning users, to help
experienced users find their favorite sites, and to make it easier
to create personal annotations.
- Structured
Web Site Design
- By Daniel Schwabe, PUC-Rio, Brazil. Daniel describes the Object-Oriented
Hypermedia Design Model and demonstrates how it can be applied to a World Wide
Web site. OOHDM comprises four incremental and iterative activities, each of which
involves building a set of object-oriented models.
- World
Wide Web Benefits and Dangers for (Traditional) Hypermedia Research
- By Andreas Dieberger, Georgia Institute of Technology, USA. Andreas focuses
on three main issues: navigation (applying navigation research to the Web),
the user interface (the Web as a globalized user interface) and prototyping (using the Web
for hypermedia research). He stresses some of the problems of applying existing
research to the Web and using the Web for hypermedia research.
- The Eastgate Web Squirrel
- By Mark Bernstein, Eastgate Systems. Mark describes Web Squirrel, software to help
users manage all of their Internet resources. It is a great example of
how hypermedia research can be applied to the Web, since research on spatial hypertext
was applied in building Web Squirrel.
- Simplicity
and Extensibility: What we can learn from the Web
- By Roy Fielding, University of California, Irvine, USA. As a Web developer (Roy
wrote the HTTP specification), he first asks why he even wants to be at the workshop.
Although the position paper is not yet complete, you can gather from the outline
and title that its focus is on explaining why the Web has been successful and how
it will continue to be successful: it is simple by design and
extensible by design. So incorporating more hypermedia research won't
be that hard; in fact, the designers are, in some sense, waiting for the help.
Just be careful to avoid the "non-solutions" that come about
when you don't understand the social context of the Web.
Report on the workshop (by me)
(A version of this report appeared in the June 1996 issue of the SIGLINK Newsletter.)
A one-day workshop at Hypertext '96 on Hypermedia Research and the World Wide Web was
held March 17 in Washington, DC. I organized the workshop and was very anxious
during the position paper phase before the conference: I had sent out over 30
personal invitations, but nobody was willing to participate in my workshop. I was
beginning to wonder if I was the only person who saw this huge gap between the
World Wide Web and hypermedia research communities. I even thought about canceling
the workshop because of lack of interest. But after extending the submission
deadline as late as possible and making a few more contacts, I was finally able
to get a good group of participants together, and the workshop was held. Below
are some notes on what transpired.
As it turns out, the reason few people could attend my workshop was NOT that they
did not also see this problem. Rather, everyone was already talking about it
in other places. For example, both of the other workshops at Hypertext '96 spent time
talking about these same issues. The discussions I had with others in the hallways
during the technical program also told me that I was on the right track and that the
hypermedia community was beginning to feel it should take some action to forge
closer ties to the Web community. Many people reiterated the same questions: "Why
aren't the Web developers looking at the existing research? Why are they reinventing
the wheel?" I agree that the Web community is not paying close enough attention
to the existing research, but since the workshop (and after addressing similar issues
at CHI 96) I have developed comeback
questions for these: "Why isn't the hypermedia research community more active
in Web development? Why aren't they submitting more papers to the Web conferences?
Why aren't they applying their research to the Web to show how useful
their research is?" It is a two-way street, folks.
Anyway, back to the workshop. I had three main goals for the workshop:
- bring together hypermedia researchers and Web developers
- document hypermedia research's role in current Web development
- lay out the Web's role in future hypermedia research
Goal #1 was accomplished, even though I did not get the 50/50 mix of hypermedia
and Web people that I wanted. Representing the hypermedia research side were:
- Keith Andrews, Graz University of Technology and Hyper-G
- Paul De Bra, professor
at Eindhoven University of Technology
- Gary Hill, University of Southampton
and Microcosm
- Stewart Shen, Old Dominion University
- Daniel Schwabe, PUC-Rio
- Andreas Dieberger, Georgia Institute of Technology and MOO/MUD navigation
expert
- Mark Bernstein, well-known researcher and conference organizer from
Eastgate Systems
- Marvin Pollard, graduate student at the School of Library and Information Studies
at Florida State University
Representing the Web side were
two big-time experts, fortunately for us:
- Dan Connolly, who works for the World Wide
Web Consortium (the W3C is the "referee" in the competitive game between Web vendors) and
who wrote the HTML specification
- Roy Fielding, graduate student at the
University of California, Irvine, and author of the HTTP specification. (Actually,
Roy could go in either camp, but he is better known for his HTTP work than for his research
in hypermedia-based software engineering environments, so I classify him as a Web expert
in this context.)
I want
to publicly thank all participants, but especially Dan and Roy, since not many members
of the Web community have realized that the hypermedia researchers do indeed
have valuable things to contribute to the development of the Web.
The participants submitted position papers before
the workshop. Everyone was
given some time at the workshop to state their positions.
We had worthwhile discussions on many topics. But what was most interesting
was when one of the hypermedia researchers would identify
a need for the Web (such as typed links) and then either Dan or Roy would come back with
"It has already been specified, has been part of the Web definition for years,
but we are just waiting from the vendors to implement it". It made me realize that
you cannot get a good understanding of how the Web might evolve by simply looking
at what Netscape Navigator does today and reading people's poorly-designed
personal pages.
You have to read the specs, see what new standards are being proposed, and you
have to demonstrate to the vendors why they should implement some old feature
from another hypermedia system. Why doesn't Netscape do typed links yet? Because
their customers haven't told them they want typed links yet. Because Netscape
does not see any financial reason to do typed links yet. It is up to the hypermedia research
community to demonstrate to Netscape that typed links are good for users and
for publishers, and that Netscape could sell more software if it supported typed links.
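To make the "already specified" point concrete, here is roughly what typed links look like in HTML itself: the REL and REV attributes on the A and LINK elements have been in the HTML specification for years, even though mainstream browsers largely ignore them. This is only a minimal sketch of mine; the relationship names used here (such as "glossary") are illustrative, not a standardized vocabulary.

```html
<!-- A sketch of typed links using the REL and REV attributes that are
     already defined in the HTML specification. Most browsers of the day
     simply ignore the link type, which is exactly the point made above. -->
<HTML>
<HEAD>
<TITLE>Chapter 3: Navigation</TITLE>
<!-- Document-level relationships declared in the head -->
<LINK REL="next"     HREF="chapter4.html">
<LINK REL="previous" HREF="chapter2.html">
</HEAD>
<BODY>
<P>See the
<!-- An inline typed link: REL names the kind of relationship, so a browser
     or link service could render, filter, or traverse it differently.
     "glossary" is an illustrative relationship name, not a standard one. -->
<A HREF="glossary.html#typed-link" REL="glossary">definition of a typed link</A>
for more detail.</P>
</BODY>
</HTML>
```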
Anyway, back to the workshop. Goals #2 and #3 were sort of merged together during
the workshop. After the presentations, our discussion focused on "opportunities
for improving the Web", a nice way of saying that in some ways the Web sucks. But
the difference between saying "it sucks" and saying "here's how to make it even better"
is crucial to getting people to listen to you. No one wants to hear
a bunch of "old farts" whine about the "good old days." But companies are ALWAYS willing
to listen to how they can enhance their products to make more money. So, we
discussed different ways to improve the Web, being sure to mention both the
existing research that suggests each idea is a good one and the existing Web
specifications that would make realizing it possible today. We had
to keep in mind that our audience for this list of opportunities was not other
researchers,
but rather the people at the big companies that are shaping the Web.
But we did not have nearly enough time. So, writing up the list of opportunities
has been left as a post-workshop exercise. We have made some progress so far, but
the going has been tougher than I anticipated. The first problem we ran into
was specifying all of the things that need to be considered when suggesting
an improvement. One should not just implement a new feature without considering
the implications for the infrastructure of the Web, for the users, and for everything
in between. Even the very best idea probably would not be worth it if it
had huge negative effects on network bandwidth and browsers, for example. You
have to understand the entire Web to be able to make this kind of cost/benefit
analysis. So, we came up with a template
to fill out for each opportunity, being
sure to include sections for existing research and existing Web specifications.
There is also a heavy emphasis on users.
We also have a long, unorganized list of
opportunities. The list needs to have
different overlays on top of it so that the opportunities can be grouped in different ways. And
each opportunity still needs to be expanded. But I feel this is quickly becoming
too much for our little group of participants to handle. We could use some help
from others, so if you see an opportunity that tickles your fancy, feel free to
take charge of it and develop its criteria. If your opportunity involves the
HTTP protocol, contact Roy to see what he thinks. If your opportunity might
be implemented in HTML, contact Dan because he understands the history behind
HTML and knows what is likely to come in the future.
Our goal is to get the W3C to put its "rubber stamp" on this document and
have it presented to its members (the big vendors).
One other document that we want to produce from the workshop is
a reading list for Web developers: the key papers and books they could read
to become aware of the larger field of hypermedia research and to make it easier
to apply some of it to the Web. This list has not been compiled yet, so feel
free to contribute your ideas. Send them to me.
So, in summary, the seed has been planted for better communication between the
hypermedia research community and the Web community. But both sides will have to
work hard to water and weed the plant to ensure that it will grow, bear fruit, and
prosper. Please help out!