The Search Lounge

10/28/2004

Snap

Snap (No, not the old Cnet/NBC Snap. This is a new Snap)
Type of Engine: Popularity based on user data* and shopping.
Overall: Good.
If this engine were a drink it would be…a Bloody Mary. There are different ways to make it; there are lots of ingredients inside it; and you can see all the various colors and vegetables floating around inside whether you want to or not.

*I couldn’t think of a better way to say this, but basically Snap incorporates data into their algorithm about which sites users clicked on. In the search industry that’s known as CTR (click-through rate). If you know a cleaner way to express what kind of engine Snap is, please email me.

Intro
Snap uses click data, including which sites get clicked on and how many sub-pages users view, to help determine relevancy. They state they have data from 1 million users going back to January 2004. This is an idea that has been kicked around and used to various degrees by other companies, including LookSmart and Google AdWords (maybe others too). But Snap has taken things a step further; not only do they incorporate click-through rates, but you can also re-sort your results using data from other users. I’ll explain more later.
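If I had to sketch what that blending might look like, here's a rough guess in Python. To be clear, Snap hasn't published its formula; the field names and weights below are entirely my invention.

```python
import math

# Purely hypothetical: blend a text-relevance score with click signals.
# base_score, clicks, and avg_page_views are invented field names, and
# the weights are made up; Snap's real formula is not public.
def blended_score(base_score, clicks, avg_page_views,
                  click_weight=0.3, views_weight=0.1):
    # log1p dampens raw click counts so a handful of popular sites
    # don't drown out everything else on rare queries.
    return (base_score
            + click_weight * math.log1p(clicks)
            + views_weight * avg_page_views)

results = [
    {"url": "fema.gov/library", "base_score": 2.1, "clicks": 5, "avg_page_views": 1.2},
    {"url": "ala.org/disaster", "base_score": 1.9, "clicks": 3, "avg_page_views": 2.5},
]
results.sort(key=lambda r: blended_score(r["base_score"], r["clicks"],
                                         r["avg_page_views"]),
             reverse=True)
print([r["url"] for r in results])
```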

--A quick aside: If you’re starting a new company, why use the name of a deceased company that did business in the same industry? Snap.com, the departed search engine that was jointly owned by Cnet and NBC and later became part of NBCi, existed for several years. So I was very surprised to see a new phoenix rise from those ashes (there is no affiliation between the two Snaps as far as I know).--

UI and Features
Because there’s a lot going on with Snap’s UI I’ve broken this section into subsections.
The Home Page
The first thing you’ll notice is their cluttered homepage. It’s jumbled compared to the now-ubiquitous Google-style UI. But if you take a moment there’s some pretty good stuff to see. On the left of the page are a few summaries of popular queries: Top Products, Top People and Top Music. On the right are the latest articles from Snap’s Blog. In the middle of the homepage there’s a lot: a running tally of the number of searches done, plus graphs; click on those and you’ll see all kinds of stats about Revenue, Search and Advertisers. Check it out. Seeing as how I have a weird hobby of analyzing search queries, I like the keyword statistics page, but I’m not sure the average user has much use for it. It’s all part of Snap’s goal of transparency, though.

It’s certainly a different type of homepage, but I think they’d do better to put some of this information into their About Us section or elsewhere. When I first landed on the Snap site I wasn’t sure whether I’d reached a regular search engine or something else, like an enterprise search company trying to sell me a search product. I’m glad they have all this information; I just wish they’d move it off the landing page.

Related Keywords/Count
After you do a search you’ll see in the upper right corner a list of similar searches and their frequency. Interestingly enough, I remember that the original Snap.com (the Cnet/NBC one) had a similar feature that displayed related searches. I tried a search for library disaster plans and the related keywords were things like library of congress, floor plans, wet, libraries, etc. None of the terms were relevant to my search as stand-alone terms. I clicked on wet because I was curious about it, and of course the related keywords for wet were porn queries that I won’t repeat here.

Refining Queries
After you do a search there’s a search box called Type Here To Refine where you can search within your results, almost like the browser’s Ctrl-F find merged with search. Give it a try; it’s fun to play around with.
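In spirit it's just filtering the current result list by a new term. A minimal sketch, assuming invented title and snippet fields (this is my toy model, not Snap's code):

```python
# Toy "search within results": keep only results whose title or
# snippet contains the refinement term. Field names are invented.
def refine(results, term):
    term = term.lower()
    return [r for r in results
            if term in r["title"].lower() or term in r["snippet"].lower()]

results = [
    {"title": "Library disaster plans", "snippet": "flood recovery steps"},
    {"title": "Library of Congress", "snippet": "collections overview"},
]
print(refine(results, "flood"))  # only the first result survives
```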

Columns
On the search results page there are several columns that appear along with the site results. Clicking on columns lets you sort results in different ways. This is a nice feature, but again it’s one that most searchers really don’t need in my opinion. I can see its value for advertisers who may want to purchase a keyword – think of the Overture bidding model – but for the average searcher sorting by conversion rate isn’t necessary. (By the way, the founder of Snap is Bill Gross, the same guy who founded Overture.) You can even reverse sort by rank, but again I’m not sure why I’d really want to do that. The columns in the results are as follows (note: I started to define each one but then realized Snap has a nice glossary, so I took it directly from them):

1. No. of Clicks is the count of users in the Snap Network since January 2004 who did the same search you just did, and who then clicked on this listing. Typically, the higher the better. An asterisk* indicates that the data is estimated.

2. Average Page Views is the average number of pages of this site that were viewed by users in the Snap Network who did the same search that you just did. Again, higher numbers are typically better. An asterisk* indicates that the data is estimated.

3. Cost to Advertiser is the amount that an advertiser pays Snap for referring a customer who subscribes, purchases, bids, registers, or downloads.

4. Conversion Rate is the percentage of users who subscribe, purchase, bid, register, or download at an advertiser's site. You guessed it, higher numbers are better — they typically mean that this advertiser is better at fulfilling customer needs.

5. Domain is the top level domain of the site. Most commercial sites in the United States are '.com' sites. However, depending on the search term, there are often excellent results in other domains, such as .edu (educational) and .gov (government).

You can filter multiple columns simultaneously. For example, in a column that displays numbers, type in '>' (greater-than), '=' (equal-to), or '<' (less-than), followed by a number, and at the same time filter on a phrase in another column.
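Here's my guess at what that filtering amounts to under the hood; a minimal sketch assuming a simple operator-plus-number syntax (the parsing details are mine, not Snap's):

```python
import operator
import re

OPS = {">": operator.gt, "=": operator.eq, "<": operator.lt}

def parse_filter(expr):
    # '>100' becomes a numeric predicate; anything without a leading
    # operator is treated as a phrase match on the column value.
    m = re.match(r"([<>=])\s*(\d+(?:\.\d+)?)$", expr)
    if not m:
        return lambda value: expr.lower() in str(value).lower()
    op, num = OPS[m.group(1)], float(m.group(2))
    return lambda value: op(float(value), num)

def filter_rows(rows, column_filters):
    # All filters must pass at once (AND across columns).
    preds = {col: parse_filter(expr) for col, expr in column_filters.items()}
    return [row for row in rows
            if all(pred(row[col]) for col, pred in preds.items())]

rows = [
    {"clicks": 1735, "domain": ".com"},
    {"clicks": 917, "domain": ".com"},
    {"clicks": 5, "domain": ".gov"},
]
print(filter_rows(rows, {"clicks": ">100", "domain": ".com"}))
```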

Shopping Queries
Shopping queries, like ipod, produce different columns. I could sort by price range, type of memory, amount of memory, weight, etc. These columns vary based on the shopping query, so if you search for laptops you’ll see columns like OS, screen size, etc. Very nice, because what they’re doing is customizing the UI based on the query, which I love. If you click on one of the results you’ll get a nice preview lower on the page. Although all the results were relevant in that they were indeed portable music players, many of them were actually ipod competitors. It’s a common practice for shopping engines to do this, but I think it negatively impacts relevance. If I wanted to see ipod competitors I’d search for something like portable MP3 players. Many commercial searches, such as bose wave, didn’t get any special shopping columns. Not so good, but maybe they’re still building their shopping list.

Corporate Queries
Another type of query that gets special treatment is a query for a company, like Amazon. The interface is different: there is a list of most popular sub-pages, a cached snapshot of Amazon’s homepage, a company snapshot, and news headlines. A nice feature, but again it’s inconsistent, because when I searched for Nike I got a regular results page.

Logos
Next to each result there’s a spot for a logo. It’s at the domain level so that AOL member personal pages have the AOL logo even though they’re not corporate-sponsored. But as with other things on Snap this is inconsistent. For my Nike query the only sites that had logos were a sub-page from MSN and a sub-page from Amazon.


Query Examples
I cheated a little and included several query examples in the UI and Features section in order to illustrate Snap’s features. But in terms of relevancy, Snap is pretty good. For my library disaster plans query the results were good. Though I should point out that result #1 wasn’t, because it was a FEMA information sub-page titled “library.” But I can see why Snap returned it so I’ll let it slide. I reordered results by number of clicks and things looked slightly better because the FEMA site went away. Though again I should point out that for this obscure query the highest number of clicks was only 5. For context, the highest number of clicks for Nike was 1,735, followed in second place by 917.

I tried cliff house restaurant san francisco and the results were good. The official page was #1, though it’s interesting to note that it had only 1 click, whereas the second result, a Yahoo Travel restaurant review, led with 8 clicks. This is a good example of how click data can mislead an algorithm. Even if more people click on the review, the official page needs to be the first result. And Snap got it right.

Conclusion
I like what Snap’s trying to do, but I fear they’re overloading the average user with too much information. They’re showing the guts of their technology rather than incorporating it seamlessly behind the scenes. After you do a search and are waiting for the results, which can be slow, the screen will say things like Preparing Data for Display, De-Duplicating Listings and Getting Top Listings. I appreciate the honesty but I’m not sure it enhances my experience.

The columns are fun to play around with but I think they take up valuable real estate that could be used for displaying more metadata and text about the sites. Maybe Snap can play around with its UI the way A9 has and make it more customizable so that I can eliminate or add columns as I see fit.

They also need to take some of their nice features, such as the differentiated results page for shopping queries, and roll that out for many more terms. I’ll give them the benefit of the doubt and assume they’ll get to this soon.


I want to say one more thing though: Snap is pushing things and I really appreciate that. I like very much that they’re making an effort to be unique and I will definitely pop in periodically to see how they’re doing and what new innovations they've implemented.

10/20/2004

Netnose

Netnose
Type of Engine: User ratings.
Overall: Needs Improvement.
If this engine were a drink it would be…a wine cooler. I like wine and I like juice, but the two together aren’t so great.

Intro
Netnose lets anyone rate the relevancy of search terms to specific web sites. If I understand it correctly, sites only appear in Netnose results if they’ve been rated as relevant by a user. I guess that explains why so many queries I tried performed very poorly; the content simply had not been added to their database. I like the idea of user ratings, but a little-known engine like Netnose needs to do more to supplement relevance in the meantime, otherwise users won’t come back. Right now it seems like they’re waiting for the community to do enough to make it a usable search tool, but they’re far from that goal.
I almost hate to give them a Needs Improvement rating, but my perspective is that of the searcher and relevance is my goal. So although the technology and goals of letting users add ratings may be admirable, I won't be totally convinced until I see the relevancy improve.

UI and Features
One thing that bothered me: how do I rate specific sites rather than random ones? Let me explain. If you click on rating you’ll go to a page that lets you rate now. A Javascript pop-up window then opens along with a random web site. In the pop-up window there’s a selection of suggested query terms for the site, plus a ratings scale of bad, fair, good, better or best. You can also add commercial and adult tags. So you rate each term and then go on to the next random site. Generally the suggested terms were decent. OK, so far so good, I suppose. But what about when I do a search? I want to be able to rate the search results right then and there, and I don’t see a way to do that. You can assign a category at that point, but not a rating other than dead link/totally irrelevant. Mapping the search results to the rating process would streamline things and garner more ratings.

The site submission page actually lets you enter metadata for newly added sites, including 5 search terms and a category. Although I love the category idea, the categories don’t make a whole lot of sense to me: Just for Kids, Research, How To, Entertainment, Real Time Stats, Shopping, Business Related. Go ahead and try assigning one of these categories; it’s not always easy. There are also questions about whether the site sells products, has adult content, uses pop-ups, or has paid content. Letting the public add metadata is a noble idea and I’m far from giving up on it as viable, but having worked for two different search engines I know first-hand how devious spammers can be. Though I should add that sites don’t go live in search results until other user(s) rate them. All very interesting ideas, similar to what some other community-sustained sites, such as Zeal and ODP, have done. Though those two directories are more browsing-based and Netnose is purely search-driven.
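To make that go-live gate concrete, here's how I imagine it might work; a minimal sketch under my own assumptions, since Netnose hasn't published its data model:

```python
# Hypothetical model of Netnose's "rated by someone else before going
# live" rule. The class, field names, and scale mapping are my guesses.
SCALE = {"bad": 1, "fair": 2, "good": 3, "better": 4, "best": 5}

class Submission:
    def __init__(self, url, terms, submitter):
        self.url = url
        self.terms = terms          # up to 5 suggested search terms
        self.submitter = submitter
        self.ratings = {}           # user -> numeric score

    def rate(self, user, label):
        self.ratings[user] = SCALE[label]

    @property
    def live(self):
        # Goes live only once someone other than the submitter rates it,
        # which blunts (but doesn't stop) self-promoting spammers.
        return any(user != self.submitter for user in self.ratings)

s = Submission("example.com", ["barry zito"], submitter="alice")
s.rate("alice", "best")
print(s.live)   # False: only the submitter has rated it
s.rate("bob", "good")
print(s.live)   # True
```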

Query Examples
The relevancy is not so good. In fact, sometimes it’s very bad. For example, I searched for Barry Zito and the 1st and 4th results were about Barry Manilow. The 2nd was about Dave Barry and the 3rd about the Barry Choral Society. So then I went to the Advanced Search page and required both terms, Barry and Zito, and got no results. That’s an outcome of no one having mapped Barry Zito to a web page about him. It would be better to default to fall-through results from another engine than to show me sites about Barry Manilow that only match the first term of a two-term phrase query. Bad.
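For what it's worth, even a crude require-all-terms pass would have kept Manilow out of my results. A toy illustration with made-up documents:

```python
# Require every query term to appear in the document (AND semantics).
# With OR semantics, "Barry Manilow concert dates" would match too.
def matches_all_terms(query, doc_text):
    text = doc_text.lower()
    return all(term in text for term in query.lower().split())

docs = [
    "Barry Manilow concert dates",
    "Dave Barry humor columns",
    "Barry Zito pitching stats",
]
print([d for d in docs if matches_all_terms("Barry Zito", d)])
# ['Barry Zito pitching stats']
```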

Try doing a search and then refreshing the browser. The ordering of results slightly changes each time. Odd…

Conclusion
I like the idea of Netnose, but for something like this to work there really has to be a critical mass of users working diligently to make a difference. Right now I don’t think that’s the case. Letting users rate sites is an idea that’s been discussed for years. I know Google played around with a toolbar rating system. I don’t think that ever caught on, but does anyone else know differently? It’s a chicken-and-egg problem: you won’t get enough users interested unless the search is good, but the search won’t get good until enough users rate sites.

10/18/2004

Entireweb

Entireweb
Type of Engine: Basic search engine.
Overall: Average.
If this engine were a drink it would be…a glass of brännvin, as in Swedish schnapps. A traditional drink that may never make it into your repertoire but it’s good to know it’s out there.

Intro
Entireweb is a Swedish engine. It’s basic and straight-forward with a few nice touches. Nothing on the front-end drastically sets them apart.
On their About Us page they state very clearly that the search is currently a Beta version, though I don’t know how long that’s been the case.
Their spider is called SpeedySpider.

UI and Features
Entireweb offers several ways to refine queries. There are more than 30 languages to choose from, or you can limit by continent (when did Scandinavia become a continent?), domain, country or a combination thereof.

There’s an advanced search page for Boolean searching. One option they have is “How many rows of 'Page Content' to show.” This is a nice feature, though at present the max is 5 lines. I’d be interested in playing around with setting it higher than that. I’ve always felt that site descriptions on search engines are a major area for improvement.

I like the language and region metadata they show on search results page. Here’s an example (I snipped description and page content to save space):
1. Brad Friedel - Latest News Headlines [SpeedyView]
Description:
Page content:
http://www.myfootballnews.co.uk/Brad-Friedel-Player-News-267.html
Language: english Region:uk
Related links:
http://www.myfootballnews.co.uk/Blackburn-Rovers-Team-News-10.html
http://www.myfootballnews.co.uk/player-news.html

I like the language and region tags. I’m not so excited about Related Links, but that’s because for all the searches I did, I never saw URLs that were not sub-pages from the same domain. So instead of Related Links, it should be called More results from same source. Speedy View is a page preview feature.

Check out their Speedy Spy page. On it you’ll find their Top 20 queries and also the latest 20 queries. I kept entering a search and then quickly refreshing the Speedy Spy page, but I never did see my query appear on the list. I appreciate it though when engines give insights into user behavior.

Query Examples
I had a little soccer injury last week so I searched for “strained calf muscle”. I wasn’t too impressed with the results. The first couple seemed to be from commercial sites, and when I’m looking for medical info I really want a trusted source. Some of the other results were OK, but they were focused on the knee or Achilles tendon, with the calf mentioned only in passing. A better results page would have had content specifically about calf muscles from non-commercial medical sites.

The number of results is always pretty low. I searched for “huckabees” and got 4 results. All four were about the movie. The first two were very relevant and the second two were too specific for my query. But where is IMDB? Where is the Apple.com trailer? Where are the movie reviews? And none of the results had the word “huckabee” anywhere in the visible display text, and that’s bad.

Conclusion
I’ve had fun reviewing visual and clustering and music and meta-search engines, so it’s nice to return to a classic AltaVista/Google/FAST/Wisenut/Teoma-style search engine. Entireweb seems to have a good base to build on. They say they’ve been around since May of 2000, but that in 2002-2003 they redesigned their search technology. So I’m not sure where that leaves us now at the end of 2004. Is the 2002/03 search the beta search that’s live? If so, that’s not very impressive. I don’t need them to revolutionize their UI or search features, but their relevance needs some improvement. They need to build up the size of their index as well as improve the order of results.

10/15/2004

Mooter

Mooter

Type of Engine: Visual and clustering.
Overall: Good.
If this engine were a drink it would be…an Emu Export. It’s Australian, it has a funny name, I’d never heard of it until very recently, and it’s a safe bet that you’ve never heard of it.

Intro
Mooter is a visual clustering search engine and I like it. They’re from Australia and have been live only about a year.

According to their Technology site, which actually provides some useful information about what they’re up to, “Mooter gets it results from its own spidering, and a unique index of websites. While we are growing, we are supplementing our index with metasearch, and comparing the results from various engines before applying our analysis algorithms.” This is an interesting statement and I’m not exactly sure what they mean by it. If I had to guess it sounds like they’re spidering other engines’ indexes to create their clusters. Is this different from what Clusty or Clush does? I’m not sure, but would love to know the answer. Please email me if you know. It sounds like they plan to generate an entire web index, but that could be wishful thinking.

UI and Features
For the most part I like their interface; it’s simple and almost cheesy, but somehow likable. The Overture-supplied Sponsored Links are killing me though. When you click into a cluster, the Sponsored Links take up nearly half the screen; bad, very bad.

You can click “All Results” to get the full list of results. Mooter maxes out at 120 results, or at least I didn’t find any queries that produced more than that.

If you don’t like the first cluster you see, click on the “Next Clusters” icon (the icon needs some improvement; it looks like a cluster of red pimples) to see another cluster.

Query Example
For phrase searches, each word usually becomes a cluster. For the search “William Styron” one of the clusters was "William." Not good, but then I clicked on the cluster link and the sites were indeed about William Styron, and not just any old William. But still a "William" cluster doesn't really help me.

Conclusion
Even if the name of a cluster doesn’t sound relevant, the links contained therein were generally on target. So I’d say they’re getting the back-end organization of clusters correct, but what they need to do is improve their cluster names and concepts. Maybe more phrase matching rather than pulling out just single terms, as if I know what I’m talking about.

They could also make the visual part of their results more compelling. As it is right now, it almost doesn’t need to be visual because the visual part of it doesn’t add much beyond novelty (and even the novelty is wearing off as more Kartoo-style visual engines appear).

10/14/2004

BrainBoost


BrainBoost
Type of Engine: Natural language.
Overall: Very Good.
If this engine were a drink it would be...a Jack and Ginger. An old-time favorite search type that tastes refreshing after not being tried for a long time.

Intro
Don't be annoyed that there's no space between the words Brain and Boost. Instead, go ahead and ask BrainBoost questions in plain English and you’ll get answers in plain English. BrainBoost boasts that it’s completely automated and uses no human editorial intervention. I guess that’s impressive since it's pretty good as is, but I still think any engine can only get better if editors are used in some capacity. And don't let the fact that AskJeeves uses editors fool you. It's common sense to me that a successful combination of the two approaches would be best for relevance; but I digress.

The legend goes that BrainBoost was created by 24-year-old software programmer Assaf Rozenblatt. It took him a year to build, and he built it so that his fiancée could better do her college research. (And all I gave my wife was this search engine review blog. Ouch.)

BrainBoost is honest. What does that mean? It means that when it doesn’t know the answer it doesn’t pretend it does know. Usually. Of course it’s not perfect and you get your share of false positive matches, but generally speaking it’s solid.

UI and Features
The Snap Open feature is cool because it opens to the relevant part of the listing; think of an anchor tag where it takes you right to the text on the page that answers your question.
Otherwise it’s all pretty straightforward.

Query Examples
I wanted to know Tony Gwynn’s lifetime average, so I asked:
what was tony gwynn's lifetime average? and I got no results. Strange, since that should be a relatively easy one.
So I refined my query: what was tony gwynn's lifetime batting average? This time I got 2 results, and the second one had the answer displayed right there on the search results page: .339 (it turns out from other results I looked at that it was actually .338, but that's certainly not BrainBoost's fault). I didn’t even have to click through to the site. Now obviously that has potential repercussions for all the engines that make $ by driving traffic to sites. But for now I'll stay out of the financial fray.

For my next search I wanted to find out how much an annual subscription to Smithsonian magazine costs. So I queried:
how much is a year's subscription to smithsonian magazine? I got 2 results that both had to do with getting a subscription as part of donating to an organization. Not good.
A little refinement was in order:
what is the annual cost of smithsonian magazine? returned no results.
Better try again, but this time I gamed the system by using a keyword-based phrase query instead of natural language. subscription to Smithsonian returned 6 results, 5 of which answered my question. Though with this one I had to click on the actual results (man that sounds lazy) to see the answer to my query because BrainBoost's display text didn’t show me the answer. That’s par for the course with search engines, but I was hoping BrainBoost would display the answer right up front to this query. And in case you’re wondering, how much is a subscription to Smithsonian? had a very similar result set, though it missed one of the results from my previous query and the display text for the same results was different. But the point is I was able to query this by keywords and by natural language and get good, though slightly different, results.

And of course everyone enjoys a good laugh at the expense of natural language engines, just like we all enjoy laughing at translation engines. So here’s a good one: what is a sous chef?. The first result:
-The sous chef is legal. Hes an American.
But then a few results down is this great display text:
A sous chef is a chef ranking above line cooks and below an executive chef or chef de cuisine.
Nice!
Conclusion
Good stuff. I’m a fan and will use BrainBoost when that nagging trivia question hits me, like what’s the population of Wales? You’ve got to see that one for yourself, because it’s really good.

Clusty

Type of Engine: Clustering.
Overall: Good.
If this engine were a drink it would be...an Anderson Valley Oatmeal Stout. It's thick and sweet but most people won't ever get to taste it.

Intro
Clusty is a new search tool made by Vivisimo. It’s a meta-search engine, so all results are provided by other search tools. Its distinction is that it clusters results so that you can refine your query by clicking on a more focused topic.
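If you've never seen clustering, the flavor of it can be conveyed with something embarrassingly naive: bucket results under a shared title term. Vivisimo's real algorithm is certainly far more sophisticated; this is just my toy illustration:

```python
# Naive clustering: group results under any non-stopword their titles
# share. This only conveys the idea; it is not Vivisimo's method.
from collections import defaultdict

STOPWORDS = {"the", "a", "of", "and", "for", "to", "in", "on"}

def cluster_by_term(results):
    clusters = defaultdict(list)
    for r in results:
        for word in set(r["title"].lower().split()):
            if word not in STOPWORDS:
                clusters[word].append(r)
    # A term only makes a useful cluster if several results share it.
    return {t: rs for t, rs in clusters.items() if len(rs) > 1}

results = [
    {"title": "Black Rider tickets on sale"},
    {"title": "Tom Waits Black Rider review"},
    {"title": "Buy theater tickets online"},
]
print(sorted(cluster_by_term(results)))  # ['black', 'rider', 'tickets']
```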

UI and Features
Right now in Web Search there’s a drop-down menu that allows you to cluster as follows:
Source – by engine.
Topic – the heart of Clusty which is clustering by subject.
URL – sort by .com, .org, etc. Also by country code and occasionally by a particular domain though it seems inconsistent as to when this shows up. This is an interesting feature, but I’m not really sure what to do with it just yet. I suppose if I were doing a search for government documents it might be useful to look only at .gov results.

Other tabs, such as News and Shopping, have different clustering drop-down options. I’ll let you explore each of these on your own.

Not only can you set which tabs are seen, but you can also customize which sources are searched. For example, for News searches you can choose Reuters, BBC, CNN or other news sources. Very nice.

If you click on News, Encyclopedia or Gossip, then Clusty will generate a page with related content on it. This is a helpful feature but the front-page of Clusty should let you know about it.

To see specific recall information for each engine that was searched, click on the Details link above the results.

Icons – in the search results you’ll see the following helpful icons:
New window – opens result in new browser window.
Show in cluster – this highlights which cluster on the left contains the site.
Preview – opens the site within Clusty’s search results page.

Query Examples
The more I played around with Clusty the more I liked it. For example, try searching for tickets to an event. I tried the query Black Rider tickets, as in the Tom Waits play, and thought the clusters were pretty good because it successfully showed me a selection of sites where I could buy tickets to the play.
Clusty currently errs on the side of higher recall for its clusters, so many of the clusters are irrelevant. This can be OK if it leads to discovery, but I think the major area Clusty can improve upon is tightening the relevance of the clusters. For my Black Rider tickets query one of the clusters was Game, which meant nothing to me. Another ambiguous cluster was Your tickets ready. Both of these clusters were poorly titled and the results contained within were not very relevant. I know, it's only one query example and you can't judge from just one query. But rest assured that I've conducted many other queries and most of them turned out like Black Rider tickets: some useful clusters alongside clusters that make you scratch your head.

Conclusion
Clusty clusters on a horizontal plane and that's what they do well: they surface information so you can scan the breadth quickly. However, I'd like to see them go a little deeper into the categories. I didn't see any categories below the second level. In other words, you do your search, click on one of the categories, and then can click on one more category, and that's all. I certainly don't want them to get carried away by overloading the depth of categories, but I think a couple more layers would be helpful in some cases.
I really wish they'd chosen a better name. Why would you ever name a search engine something that rhymes with lusty? But nonetheless, they're on to something here. I think clustering is a powerful tool that can let searchers discover similar topics or refine their queries. In the future I hope clustering engines will combine with human-created web directories.

KaZaZZ!

Type of Engine: Meta-search.
Overall: Needs Improvement.
If this engine were a drink it would be...Bud Light, the wingman.

Intro
KaZaZZ! is a search tool that, as far as I can tell, doesn’t really offer much by way of useful differentiation. Their claim to uniqueness is their Search Mind Reader, which pre-populates queries based on analysis of previous queries you’ve done. Maybe there’s a future to this if the query suggestions can really pinpoint my interests, but even then I’m not sure. Just because I searched for something today doesn’t mean I’ll be interested in the same topic tomorrow. And even if you looked at my search queries over time, how would that help predict future interests? Maybe I’m missing something here, but in its current implementation it’s not useful to me.

UI and Features
KaZaZZ! is a meta-search engine with a selection of tabs to focus your search. One thing they do is limit the number of results returned; a search for Santa Cruz only returned 80 sites. Personally I don't think limiting results is the right answer to increasing relevancy. It seems the sites that get returned are the ones that multiple engines have agreed upon.
On the search results page, check out the right hand side. Most of the refinement links are generic, such as News, Pics and Shop. But the one that got my attention was Music. It offers links to music files, though unfortunately for the query I tried, Mark Lanegan, most of the results were dead links. A nice idea though.

Oh, and Kazazz has a men’s search engine called MaZaZZ! and a women’s search engine called WaZaZZ!. Oh boy. Since I don’t have much of a search history with their sites, the pre-populated Search Mind Reader queries for MaZaZZ! were exclusively female actresses and models. And for WaZaZZ! the queries were about dieting, beauty and fashion. Groan. The fashion queries were things like Guess, Benetton, Estee Lauder, etc. In other words, they’re trying to send users to shopping sites.

Conclusion
Another meta-search engine joins the fray. I’m not convinced they offer anything particularly useful, though it’ll be interesting to see what they do with the Search Mind Reader functionality in the future.

Musicplasma

Type of Engine: Visual music search.
Overall: Average quality, yet still very enjoyable to play around with.
If this engine were a drink it would be...a mint julep. It's not your everyday drink, but you'll find it a sweet break from the norm.

Intro
Musicplasma is a music search tool that lets you discover music artists similar to ones you already like. Oh, and it’s visual, like Kartoo.

I’m not really sure how they determine similarities. If I had to guess I’d say they base it on an ontology of genres (rock, rap, etc.), and on mining something like Amazon’s “Customers who bought that, also bought these…” type of functionality.
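If it is co-purchase mining, the basic idea is simple enough to sketch. The data below is made up, and this is only my guess at the approach:

```python
# Guess at co-purchase similarity: artists count as similar when they
# appear together in the same customers' purchase histories.
from collections import Counter
from itertools import combinations

purchases = [  # one set of artists per (invented) customer
    {"Guided by Voices", "Robert Pollard"},
    {"Guided by Voices", "Robert Pollard", "Pavement"},
    {"David Byrne", "Talking Heads", "Paul Westerberg"},
]

cooccur = Counter()
for basket in purchases:
    for a, b in combinations(sorted(basket), 2):
        cooccur[(a, b)] += 1

def similar_to(artist, n=3):
    scores = Counter()
    for (a, b), count in cooccur.items():
        if a == artist:
            scores[b] += count
        elif b == artist:
            scores[a] += count
    return scores.most_common(n)

print(similar_to("Guided by Voices"))
# [('Robert Pollard', 2), ('Pavement', 1)]
```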

UI and Features
You can zoom in or out on clusters, thereby focusing or expanding your view of similar artists.

Clicking on the links – those ethereal lines – scrolls the page in that direction. Nice feature!
Clicking on other clusters will refocus the clusters around that artist.
The Design panel allows for changing colors and other appearances if you're into that kind of thing.

Query Examples
Sometimes the clusters make total sense. Try a search for Guided by Voices and the closest cluster will be Robert Pollard, the lead singer who’s done solo albums. Sometimes the clusters are a bit off. Try searching for David Byrne and for some reason Paul Westerberg – lead singer of the Replacements – comes between Byrne and the Talking Heads. I’m not saying that’s incorrect, but my first reaction was surprise. It could be accurate that people who like David Byrne’s solo stuff, which doesn’t sound much like the Talking Heads, might like Paul Westerberg, Warren Zevon and Roxy Music (all closer than the Talking Heads).
I noticed that powerhouses like the Rolling Stones and Neil Young show up in lots of places. I searched for Prince Buster, the 60s ska pioneer, and there’s Neil’s cluster. A search for Bad Brains similarly showed the Stones lurking one link away. Now obviously Neil Young and the Stones have influenced tons of groups, but I'm not sure that Bad Brains should be one link away. Anyone know why that would be?

Conclusion
I’d like to know more about the links. Is one artist linked to another because they collaborated? Or are they linked because they play similar music? Or are on the same label?
OK, so it’s fun to play with, but give me some song samples.
How about letting me type in more than one group so I can really focus in?
Focus by time period. I really like early Stones, when they sounded like, say, the Small Faces, but I hate recent Stones, when they sound like, say, crap.

Musicplasma is fun to play with, but it needs to be more practical. Take the visual music search engine and turn it into an audio search engine. If that’s too far-flung, then at least show more context on how artists are linked. But like I said, it sure is fun...

A9

Type of Engine: Meta-search.
Overall: Very good.
If this engine were a drink it would be...a martini. It's not so much about the complexity of the mixture, but it's always classy and it gets the job done.

Intro
A9 has implemented some really clever UI features. At present they’re searching Google’s index for web searches, but seeing as how they’re owned by Amazon I expect bigger things from them in the future. Because they’re using Google (the index and, I think, the algorithm with some slight tweaks), I won’t go into relevancy; the number of results sometimes varies, but page 1 results on A9 and on Google seem identical. The UI is the first thing you’ll see, and it’s what sets them apart.

UI and Features
You can focus your search in the following ways:
Web – searches Google
Books – searches Amazon
Images – searches Google
Movies – searches IMDB
Reference – searches GuruNet

Some nice things worth pointing out
The UI – the tabs are very responsive and load right away, so you don’t have to wait for a new page to load.
Amazon Discount - If you search on A9 and then go to Amazon.com to buy something, they’ll cookie you and give you a 1.57% discount on purchases. I was pleasantly surprised to discover this, since they don’t seem to advertise it.
Search History - They maintain a list of your search history that you see right away when you visit A9. I didn’t think this was the most useful feature at first, particularly now that browsers save searches, but I’ve actually used it to recall previous searches.
Diary – this function lets you leave notes about sites (I use Mozilla and you need IE so I haven’t been able to do this yet, but it sounds cool).
Bookmarks – they have a built in web-based bookmark function. Personally I don’t see why this is better than using my browser, but that’s because I don’t spend much time on public or shared terminals. For people who use different computers this is a valuable feature.
Discover – similar, I think, to Amazon’s “You’ve bought this, so check these out,” but with web sites instead of products.
Preferences – you can set language default, filtering level, font size and colors. I’m guessing they’re still working on adding advanced features here but they may be limited by searching Google’s index for general web results. Just a guess.

Conclusion
A9 is part of my regular repertoire. I'm hoping they build their own web index and algorithm; then they'll vault into the elite ranks of search engines. If that doesn't happen, they've still got one of the finest UIs out there.

Welcome to the Search Lounge

Hi,
Welcome to Search Lounge's new URL. The Search Lounge is a place for me to post subjective reviews of search engines. I hope you find these reviews useful.

Thanks and enjoy,
Stingo.