Not quite on the heels of why I don’t like Ebsco’s new visual search, parts one and two, there are suddenly all kinds of new ways to search for news and information to try. I’ll admit, I don’t totally get any of these yet. Part of the reason is that I’ve barely played with them; part of it, though, is that I don’t think they’re fully ready to be gotten yet.
What’s interesting to me is the common thread running through all of these attempts — the idea that people searching want to see how their results connect to each other. They want to see connections and context. I think this is true, and I think it’s something we have a hard time doing when we research online, especially keyword research. I’m liking the trend, though I’m still a little unclear on the execution to date.
First, from Google Labs
Google Experimental Search. If you have a Google account, and you choose to “join” this experiment, you get some additional options for your results. (A note – all of these images are screenshots. You have to be logged in and part of the experimental search to see what I’m seeing.)
The “info view” seems to be about refining your results. You can choose to focus on a particular location, or a particular period in time. Here’s the WGA strike search, refined by “Vancouver.”
I’m not sure exactly what the cool factor is with the timeline refining feature – it seems to pull out results about a particular time, not so much results from a particular time. So things like Wikipedia articles, which include lots and lots of dates, tend to appear pretty high in those results no matter which timeframe you try to limit to. I appreciate the concept behind these options, but really, I didn’t find nearly as much to play with as I did in the next two options, at least not yet.
Next, we have Silobreaker. From the site:
More than a news aggregator, Silobreaker provides relevance by looking at the data it finds like a person does. It recognises people, companies, topics, places and keywords; understands how they relate to each other in the news flow, and puts them in context for the user.
As you can probably imagine, the idea that it’s looking at things just like a person would is a little bit suspect. And from what I can see, it does better recognizing fairly concrete things like people and places than more abstract concepts or (especially) keywords that can mean more than one thing.
The default search is called the 360 search and it brings back a big bunch of different ways of looking at results. At the top is the expected list of articles and other resources, with things like photographs and YouTube videos in the right-hand sidebar. Below the fold, you’ll find the additional options:
Of these, I found the network view to be the most fun. It was really more fun for me to see the people and places that Silobreaker included in the network than it was for me to drill down to the articles and webpages associated with those people and places, but I can see where this would be valuable for certain searches. There’s also the “trends” view at the bottom, but I haven’t figured out why that’s cool yet. I don’t think I’ve been doing the right kinds of searches.
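To get a feel for what a network view like this might be doing under the hood, here’s a minimal sketch of one common approach — counting which entities show up in the same articles. This is not Silobreaker’s actual method, and the article data and entity names below are invented for illustration:

```python
from collections import Counter
from itertools import combinations

# Toy input: each article reduced to the set of entities detected in it.
# (Invented data -- a real system would run entity recognition first.)
articles = [
    {"WGA", "Hollywood", "NBC"},
    {"WGA", "Hollywood", "AMPTP"},
    {"WGA", "NBC"},
]

# Count how often each pair of entities appears in the same article.
edges = Counter()
for entities in articles:
    for pair in combinations(sorted(entities), 2):
        edges[pair] += 1

# The strongest co-occurrences would form the core of a network view.
for (a, b), weight in edges.most_common(3):
    print(a, "--", b, ":", weight)
```

Drilling down from a node in the network to its articles would then just mean listing the articles whose entity sets contain that node.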
Finally, TextMap. From the site:
[TextMap i]s a search engine for entities: the important (and not so important) people, places, and things in the news. Our news analysis system automatically identifies and monitors these entities, and identifies meaningful relationships between them.
Time and place are some factors TextMap uses to contextualize results, but its main point of organization is the “entity.” Do a search, and your results come back listed by “entities” – which can be people, places, companies and more. From the main TextMap page, you can also browse by predefined entities. Click on an entity – and your results come back clustered around that entity.
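Clustering results around entities, as described above, can be pictured as a simple inverted index from entity to result. A minimal sketch, with invented headlines and entity tags (TextMap’s real pipeline would extract the entities automatically):

```python
from collections import defaultdict

# Toy results: (headline, entities mentioned). Invented data for illustration.
results = [
    ("Writers picket studio gates", ["WGA", "Los Angeles"]),
    ("Late-night shows go dark", ["NBC", "WGA"]),
]

# Cluster: each result is listed under every entity it mentions.
clusters = defaultdict(list)
for headline, entities in results:
    for entity in entities:
        clusters[entity].append(headline)

# Clicking the "WGA" entity would surface everything in clusters["WGA"].
print(clusters["WGA"])
```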
(And at this point, the word “entity” has started to look really weird to me)
Like Silobreaker, TextMap’s options include a network view and a heatmap view. There is also a “reference time” view and juxtapositions between your entity and others.
There are some awkward, “not quite getting it” pieces to all of these options for me. Part of this, of course, comes from the fact that I just haven’t played with them very much. Part of it is probably that the underlying metadata won’t support the types of visualizations they’re trying to provide well enough – or that the sites they’re drawing data from are uneven in their metadata, so the presence or absence of metadata skews what you see in the results. Still, the idea that the user needs and wants to see the contextualization, and the relationships between the information sources they’re using, is exciting.