Take any typical web page. What is it about? You interpret the meaning of its words and illustrations, and use that information to decide your next action.
When your browser reads that web page, it is only capable of a simpler “mental” process:
- It has an address from which to collect data
- It uses HTML to display the contents
- It assembles the data into a visual presentation based on that formatting
- It has links to other addresses—other pages of data—it can go to
The computer sees the presentation but cannot recognize meaning. Likewise, you cannot tell your browser your situation, goals, or anything else that might help it behave more intelligently. It cannot be an agent on your behalf, locate novel information, or filter things out. It cannot make the experience easier.
We resolve these problems by coding “intelligence” into applications via user preferences, complex search algorithms, error trapping, task flows, and navigational signposting. These work as long as we have the resources to carefully create stand-alone applications.
What is the “Semantic Web”?
The Semantic Web vision is about embedding additional layers of meaning in web data, codifying those data in a standardized way to enable sharing, and then facilitating the creation of software that uses those data for more than just presenting an attractive display. The goal of the Semantic Web, described by Tim Berners-Lee and his colleagues in Scientific American (May 2001), is “better enabling computers and people to work in cooperation.” This definition demands the involvement of HCI and usability.
The World Wide Web Consortium (W3C) is defining the “building blocks” for the Semantic Web, focusing on data formats, ways to describe relationships, and processes that make it possible to share more meaningful data across the Web. Significantly, in September 2006, “User Interaction” became a major architectural element, reinforcing the central role users will play in the Semantic Web’s growth.
What are the Usability Opportunities and Issues?
Let’s consider how we can enhance some common web activities:
Browsing for Information
If you have shopped online in the past two years, you have probably used “facets” for filtering—each selection from a set of categories narrows your list of items progressively, and they can be selected/de-selected in any order.
You can also browse interwoven sets of information using facets, creating a dynamic, user-driven navigation of subjects. This type of browsing requires clearly described relationships between data. If I browse a museum collection for French sculptors, I would see Rodin and his statue The Thinker. Faceted browsing means that if I start instead with figures of men, cast in bronze, by famous artists, I would still arrive at The Thinker, via a path that aligns with my mental model, experience, and task. Once there, I can navigate in another direction to get additional information about Rodin, his nationality, etc. Flamenco, mSpace, and Exhibit are among the projects working on this form of interaction.
This can be done in other ways, but the Semantic Web’s focus on relationships between data and on common data formats facilitates this interaction, and means that anyone can share their data into the collection with little or no extra work. If someone discovers biographical information about the person who posed for The Thinker, it can be linked to the collection without reworking the data model. If there are descriptions of sculptors in other languages, they can be integrated into the same application—a property that is great for localization. Semantic Web structures mean the data are available for more elaborate uses in the future as technologies develop.
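The path-independence of faceted browsing can be sketched in a few lines. This is a minimal illustration, not any project’s actual implementation; the museum items, facet names, and values below are hypothetical.

```python
# A toy faceted browser: each item carries facet/value pairs, and the
# user may narrow the collection by any combination of selections,
# applied in any order. Items and facets here are invented examples.

ITEMS = [
    {"title": "The Thinker", "artist": "Rodin", "nationality": "French",
     "subject": "male figure", "material": "bronze"},
    {"title": "Bird in Space", "artist": "Brancusi", "nationality": "Romanian",
     "subject": "bird", "material": "bronze"},
]

def browse(items, **selections):
    """Keep only items matching every selected facet value."""
    return [item for item in items
            if all(item.get(facet) == value
                   for facet, value in selections.items())]

# Two different paths converge on the same work:
by_nationality = browse(ITEMS, nationality="French")
by_form = browse(ITEMS, subject="male figure", material="bronze")
```

Because the selections are just facet/value pairs over a shared data model, starting from nationality or from subject and material reaches the same item, matching whichever mental model the user brings.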
However, if faceted browsing becomes commonplace, what interaction norms will develop? How does the user turn facet selections on and off, or rearrange navigation sequence? How many facets and terms should be presented, and in what layout? What labeling constraints should we apply? How much can be presented without confusion?
Performing Complex Searches
While search tools are more widely used, more flexible, and easier than ever, they also strain under the weight of so much web content being created every day, and more sophisticated users demanding more relevance in results. Search works well when the goal is either a specific, known item or an overview of a subject where the specific information is less important than finding something.
But try some of these searches in your favorite search engine:
- What are the capitals of the countries bordering the Pacific Rim? Which have had changes in government in the last three years?
- What Greek restaurants are open after 10:00 P.M. within two blocks of the Scandinavian modern furniture store on Georgia Avenue in Washington, DC?
- What are the problems with that new migraine treatment according to official sources?
One focus of Semantic Web researchers is integrating logical models of the subject domain with the search process. The goal is to interpret the meaning and relationships in the sets of terms that people use in order to enhance search result relevance, as well as to synthesize concepts and information from sources on different subjects to solve complex problems. To tackle the questions above, search engines must traverse underlying relationships between elements in the question. Merely matching the words is insufficient.
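The relational traversal described above can be sketched over a toy fact base. The facts and predicate names here are invented for illustration, and a real system would draw on far richer models, but the shape of the reasoning is the same:

```python
# Answering a relational query by following facts rather than matching
# keywords. Facts are (subject, predicate, object) triples; the tiny
# data set is hypothetical.

FACTS = {
    ("Japan", "borders", "Pacific Ocean"),
    ("Chile", "borders", "Pacific Ocean"),
    ("France", "borders", "Atlantic Ocean"),
    ("Japan", "capital", "Tokyo"),
    ("Chile", "capital", "Santiago"),
    ("France", "capital", "Paris"),
}

def subjects(predicate, obj):
    """All subjects related to `obj` by `predicate`."""
    return {s for (s, p, o) in FACTS if p == predicate and o == obj}

def objects(subject, predicate):
    """All objects related to `subject` by `predicate`."""
    return {o for (s, p, o) in FACTS if s == subject and p == predicate}

# "What are the capitals of the countries bordering the Pacific?"
pacific_countries = subjects("borders", "Pacific Ocean")
capitals = {c: next(iter(objects(c, "capital"))) for c in pacific_countries}
```

The answer emerges by chaining two relationships (borders, then capital); no page containing that exact string of words needs to exist.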
What are the usability challenges? How might natural language interaction feel? What will user expectations be while users are unfamiliar with such techniques, and then as they become familiar? How does an application interpret the user’s language and intentions? How can applications expose assumptions or interpretations to the user? Does greater transparency mean greater trust? Can scenario-based approaches help model situations and goals? If the “answer” is not complete or clear, what is the nature of the “refinement conversation?”
Using Vocabularies and Descriptions
Recently, I provided navigation and search methods for a collection of medical documents used by non-experts. On the web I found extensive, standardized medical vocabularies that could improve the information’s descriptions and searchability. I avoided inventing more than 20,000 terms (plus synonyms, misspellings, etc.) and their relationships. I did not have to arrange for code to be written to digest this terminology. Client staff will not have to maintain it over the years. Even better, when somebody wants to integrate what they find in this resource with international medical journals, they need not start over and spend time coding and entering data. The data and relationships are consistent, and both commercial and open source tools will be available.
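One common use of such a shared vocabulary is mapping whatever term a non-expert types to the vocabulary’s preferred term before searching. The sketch below assumes a simple preferred-term/variants structure; the entries are invented examples, not drawn from any real medical terminology standard:

```python
# Query normalization against a shared vocabulary of preferred terms,
# synonyms, and common misspellings. Entries are hypothetical.

VOCABULARY = {
    "myocardial infarction": {"heart attack", "mi", "miocardial infarction"},
    "cephalalgia": {"headache", "head ache"},
}

def normalize(query):
    """Map a user's term to the vocabulary's preferred term, if any."""
    query = query.lower().strip()
    for preferred, variants in VOCABULARY.items():
        if query == preferred or query in variants:
            return preferred
    return query  # unknown terms pass through unchanged
```

Because the vocabulary is a standardized, shared artifact, every application that adopts it gets the same synonym handling without re-entering the data.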
I still must edit and arrange more than 20,000 terms (and 3,000 non-medical terms) in a coherent, efficient way. How do I look at that many terms to see which ones I must refine, or to understand the cross-relations? I need new forms of interaction to navigate, edit, connect, and review large amounts of terminology. Fortunately, there are projects both inside and outside the Semantic Web community that look at these visualization and navigation challenges.
The more consistently we represent our data and develop tools that interact with the data, the better off we will be. We all need more usable tools, and more imaginative representations to navigate large sets of vocabulary and logical relationships.
Using and Interpreting Context
There are several aspects of the Web and computer software that seem “dumb.” Why do they ignore my current goals? Why do they forget my prior experiences and expertise when filtering what I do or look at? Why will they not recognize what I already did so I can skip redundant steps? Alternatively, if I alter my usual routine, will they stop suggesting that I follow inappropriate paths based on prior actions?
We need a “language of context” that creates relationships between experience, tasks, preferences, situations, and desired outcomes. Even simple things like filtering searches by location, situation, and experience level could make the Web much more relevant.
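Even a very small, explicit record of the user’s situation can drive that kind of filtering. The field names and sample results below are hypothetical, a sketch of the idea rather than any proposed format:

```python
# A minimal "language of context": a structured record of the user's
# situation that an application can read and act on. All field names
# and sample data are invented for illustration.

from dataclasses import dataclass

@dataclass
class Context:
    location: str     # e.g. a country or region code
    experience: str   # e.g. "novice" or "expert"

RESULTS = [
    {"title": "Intro to migraine treatments", "level": "novice", "region": "US"},
    {"title": "Clinical trial data review", "level": "expert", "region": "US"},
    {"title": "Guide de traitement", "level": "novice", "region": "FR"},
]

def filter_by_context(results, ctx):
    """Keep only results matching the user's experience level and location."""
    return [r for r in results
            if r["level"] == ctx.experience and r["region"] == ctx.location]

relevant = filter_by_context(RESULTS, Context(location="US", experience="novice"))
```

The hard problems are not in the filtering itself but in agreeing on what the context record should contain and how it carries over between situations, which is exactly where user-experience questions come first.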
What formats should describe context so machines can understand these relationships? How do we encode what we know from one situation for use in another situation? These questions can be explored by Semantic Web researchers, but they must first be asked by people focused on the user experience.
Instructing Agents and Automated Tools
Do you change your preference settings in your favorite portals or news feed sites? I rarely do, although my interests and information needs change. Is it laziness, lack of engagement in my information choices, or usability problems? Could it be weariness with software that regularly reminds me I must pay attention and make several choices among several items I only partly understand?
While we might prefer using autonomous and semi-autonomous agents to reduce the time we spend on tedious tasks, we must consider how to maintain transparency of actions and a reasonable balance between knowing what agents are doing and being pestered by them.
As you work with various computer applications and websites, what are they doing, and how much do you trust what is going on behind the scenes? Can you interpret what logic and rules agents apply and the outcomes they deliver? Is your privacy respected by the organizations to whom you give information, and by those to whom they pass it on?
The more that data are integrated, the more that different, separate pieces come together to create interesting patterns. How can we know when this occurs? Might it be possible in the future to create guidelines and rules that are integrated with our data, so when those data are used the rules are known? Is it possible for information about us to be available on the Web not as isolated records, but via agents whom we instruct with our expectations on use?
These challenging questions hit at the heart of user expectations, trust, and confidence. They are the kinds of questions that projects like the Policy Aware Web look at. They are not isolated to the Semantic Web, but arise now as we use the Web. As usability professionals, we should act as advocates for the “voice of the user” in this important area!
What if the Semantic Web Never Arrives?
The Web was architecturally simple but took years to develop. The Semantic Web is more complex, and will take many more years to develop. There is always a possibility that it will never “arrive” in the form currently described.
Does that mean we should hold back?
No. The Semantic Web aims to solve problems that will remain no matter what technologies are finally deployed or what label is put on them. We should work on analysis, design, and evaluation concepts for:
- Visualization and navigation of large-scale web data
- Refined search and browsing experiences, with knowledge of context and goals
- Increasing personal interaction—the Web coming to me, in my current context, proactively supporting my goals and facilitating my relationships
- Agents that automate routine tasks with permission to talk to other agents on my behalf
- Greater collaboration and sharing with others
Early sites show promise using Semantic Web data formats and technologies. The Semantic Web aligns well with the range of new web techniques (such as “Web 2.0”) that seek to transform our online experience.
It will take time to address some of the challenges identified above. The Semantic Web community—both researchers and professional developers—seeks an understanding of usability and design. We want them to make decisions and create interaction approaches that are usable. The usability community must be part of the Semantic Web conversation at the earliest possible stage.
For more information on the Semantic Web:
- Semantic Web User Interaction activities and projects
- IPGems’ collection of references and links to workshops, demos, and project sites
- The W3C Semantic Web activity area
Retrieved from https://uxpamagazine.org/semantic_web/