Simplicity: Sparsity or Storytelling?

A tweet by @akumar prompted me to write up this quick blog post:

as with all controversial issues, there’s a positive in google trying bing/image – that they’re not afraid to learn from competition

What Amit is referring to is the recent addition of gorgeous photographic images as the search-page background.  See, for example, this writeup: http://blogs.abcnews.com/theworldnewser/2010/06/google-vs-bing-copycat-picture-on-prominent-page.html

He is of course correct; Google is learning from the competition.  But there is another issue at play here, one that I don’t want to overlook because I feel it is very important.  It is the issue of simplicity.  What is simplicity?  How is it defined?  How is it measured? Conversely, what is complexity?  What is clutter? Continue reading

Posted in General, Information Retrieval Foundations, Social Implications | 15 Comments

Seeing Stars

There is an interesting blog post on the Official Google Blog today, about seeing stars:

We’ve long believed that personalization makes search more relevant and fun. For nearly five years, we’ve been tailoring results with personalized search. Today we’re announcing a new feature in search that makes it easier for you to mark and rediscover your favorite web content — stars.  With stars, you can simply click the star marker on any search result or map and the next time you perform a search, that item will appear in a special list right at the top of your results when relevant. That means if you star the official websites for your favorite football teams, you might see those results right at the top of your next search for [nfl].

So it sounds to me like this is a sort of bookmarking.  What is not as obvious, however, is what this sentence means: Continue reading

Posted in Explanatory Search, Information Retrieval Foundations | 3 Comments

Embark Together

I would like to quickly follow up on my previous post on explicitly collaborative information seeking.  My claim in that post was that, despite the shared terminology, a service like Aardvark (or Twitter) is not truly collaborative.

Let me be clear about Aardvark: What that service does is help you comb through a network of people to find those individuals who have the highest likelihood of holding the answer to your information need.  Somebody has the answer; you just don’t know who it is.  So Aardvark helps you find that somebody.  The reason this is different from what I am talking about with explicit collaboration is that in this latter case, you already know who it is that you want to work with on resolving a shared information need.  You want to work with a relationship partner on finding an apartment.  You want to work with a business colleague on finding potential markets for a new product.  You want to work with some buddies on planning a road trip.  In all of these situations, your partner, your colleague, and your buddies don’t already have the answers that you seek.  But you do know that you want to work with them to find those answers because they have the same need that you do.  Your partner wants to live with you, your business colleague wants to work with you, and your buddies want to travel with you.  This is what explicitly collaborative information seeking is about, and it’s not the same thing as the “collaborative” category discussed in the panel.

Case in point: Take a look at the panel’s slides: http://www.slideshare.net/bmevans/introductory-slides.  Slide 9 outlines the two main social strategies: (1) Ask the network, and (2) embark alone.  This misses a third major, but as yet untapped, strategy: (3) embark together.

A good way to think about this is in terms of information seeking.  In both the (1) ask the network and (2) embark alone strategies, there is only a single user with an actual information need, a single person who is actively seeking information.  Using Aardvark, he or she is asking other people in the network whether they are able to give an answer to satisfy that need.  But those other individuals do not actively share your information need.  They either (1) already have the information that you seek, and thus an already-satisfied information need of their own, or (2) do not have the information you seek, but do not care; i.e., they do not share your information need (they aren’t going to move in with you, or go on that road trip with you).  When you ask the network, you are not actually involved in collaborative information seeking.  There is only a single seeker: You.  You are simply tapping into the network to find those people who already have the information you need.  It is still the single individual, not the network, that has the information need and that is actively engaged in the seeking process.

But embarking together with one or two other individuals who also lack information, i.e. engaging in explicitly collaborative information seeking, is an entirely different process.  In this case, there are at least two information seekers, two people who have a shared, as-yet-unsatisfied information need.  Now, there are a number of different ways you can build systems and design interfaces to support these multiple seekers in their task.  I’ve written a lot about such systems on this blog and on the FXPAL blog, and will not go into further detail right now.  The point is simply that embarking together is an information seeking strategy that is not covered by either of the existing strategies.  It is not the same as asking the network.  It is not the same as embarking alone.  It is a third process, a third strategy, and one that remains quite untapped in today’s marketplace.
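To make “support these multiple seekers” slightly more concrete, here is a minimal sketch, assuming a hypothetical session object and a division-of-labor policy of my own invention; it illustrates the idea, and is not the design of any existing system:

```python
from collections import defaultdict

class SharedSearchSession:
    """Hypothetical session for two or more people embarking together
    on a single, shared, as-yet-unsatisfied information need."""

    def __init__(self, seekers):
        self.seekers = set(seekers)       # everyone embarking together
        self.queries = defaultdict(list)  # seeker -> queries issued
        self.seen = defaultdict(set)      # seeker -> result URLs examined

    def issue_query(self, seeker, query, search_fn):
        """Run one seeker's query, recording it against the shared session.
        `search_fn` stands in for any underlying search engine call."""
        self.queries[seeker].append(query)
        return search_fn(query)

    def mark_seen(self, seeker, url):
        """Record that a seeker has already examined a result."""
        self.seen[seeker].add(url)

    def unseen_by_group(self, seeker, results):
        """Drop results that any *other* seeker has already examined,
        so the group divides the labor instead of duplicating it."""
        seen_by_others = set().union(
            *(self.seen[s] for s in self.seekers if s != seeker))
        return [r for r in results if r not in seen_by_others]
```

The essential difference from “ask the network” is visible in the data structure itself: every member of the session both issues queries and consumes results against the same need.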

Update: I have a final quick example.  On his live blog, Danny Sullivan paraphrases Max from Aardvark: “We want to do that across communication channels, so you can find partners to go bike with”.  That’s Aardvarkian social search: you want to find the people to go biking with.  Collaborative search is the next phase.  Once you’ve found the people that you explicitly know you want to go biking with, how do you find out where you want to go?  You know about all the bike trails around your house.  Your new biking partner knows about all the trails near her house.  But neither of you knows about the trails that exist halfway between your two houses.  Ideally, you’d like to find one of those in-between trails that is good for both of you, even though neither of you is yet aware of them.  (And why should you have been? Before meeting your partner, you had no reason to venture away from your favorite nearby trails.) THAT is explicitly collaborative information seeking.  When both of you actively look for new bike trails, that is embarking together.

Posted in Collaborative Information Seeking, Social Implications | 3 Comments

Don’t Forget Explicitly Collaborative Information Seeking

A panel on Social Search is happening at SXSW right now.  Reading Danny Sullivan’s liveblogging, I came across the panel’s definition of the three distinct types of social searching.  And I think they left one out:

The version that was left out was the type of search in which you don’t just ask a friend for an answer (e.g. Twitter and Aardvark), but in which you actively engage with a specific person to work on a jointly shared information need.  For example, imagine a couple looking to rent an apartment.  It’s not as though one person in the couple can simply ask the other “where should we live?”  The point is that neither person knows.  And so you can imagine an information retrieval system that has, built in, the capability to be multi-searcher aware, so that both people can work on the same task at the same time.

This is not what Aardvark does.  This is not what Twitter does. This is also not friend-filtered; this is also not collective.  It is a fourth type, a distinction that seems to have been missed by the panel — search in which a small team of people actively work together, and the search system actively mediates between them, helping the group as a whole find information that no individual already knows, and that no individual would have easily found, had that person been working alone.   For more information on this oft-ignored area, please see our earlier series of posts.

Posted in Collaborative Information Seeking | 2 Comments

Search in Social Media

What is Social Search as opposed to Social Media?  Social Search in Media?  Search in Social Media?

Next week, Gene Golovchinsky and I are moderating a pair of panels at the SSM workshop.  So we spent some time this week asking ourselves these definitional questions in preparation.  We came up with a lightweight taxonomy, and classified a number of existing systems into it by way of example.  Whether or not you are one of the 80 participants in the workshop, I would invite you to take a look at our framework and comment or critique where necessary.  Here’s the link to Gene’s writeup:

We think the phrase ’search in social media’ has been used to refer to both the information being searched, and to the process for doing so. The information is standard user-generated content — tweets, blog posts, comment threads, tags, etc. The process, however, seems less well understood…It will be interesting to see how these ideas will be transformed by the discussion at the workshop. In any case, having a language with which to talk about phenomena is a prerequisite to articulating a research agenda, particularly in a young and multi-disciplinary field.

Please note, however, that one topic that will probably not be covered is the difference between social search (process) and collaborative search (process).  A workshop on the latter will be held a few days later at CSCW.  For an interesting thread on the distinction between the two, please see another FXPAL post from March of last year.

Posted in General, Information Retrieval Foundations | 1 Comment

Kasparov and Good Interaction Design

An NYT books article about Kasparov and chess, and the relationship between humans, machines, and decision processes, is making the Twitter rounds today.  I don’t have time at the moment to write a long comment about it, but I do want to point out that it supports a position that I’ve been taking on this blog for some time now:

This experiment goes unmentioned by Russkin-Gutman, a major omission since it relates so closely to his subject. Even more notable was how the advanced chess experiment continued. In 2005, the online chess-playing site Playchess.com hosted what it called a “freestyle” chess tournament in which anyone could compete in teams with other players or computers. Normally, “anti-cheating” algorithms are employed by online sites to prevent, or at least discourage, players from cheating with computer assistance. (I wonder if these detection algorithms, which employ diagnostic analysis of moves and calculate probabilities, are any less “intelligent” than the playing programs they detect.)

Lured by the substantial prize money, several groups of strong grandmasters working with several computers at the same time entered the competition. At first, the results seemed predictable. The teams of human plus machine dominated even the strongest computers. The chess machine Hydra, which is a chess-specific supercomputer like Deep Blue, was no match for a strong human player using a relatively weak laptop. Human strategic guidance combined with the tactical acuity of a computer was overwhelming.

The surprise came at the conclusion of the event. The winner was revealed to be not a grandmaster with a state-of-the-art PC but a pair of amateur American chess players using three computers at the same time. Their skill at manipulating and “coaching” their computers to look very deeply into positions effectively counteracted the superior chess understanding of their grandmaster opponents and the greater computational power of other participants. Weak human + machine + better process was superior to a strong computer alone and, more remarkably, superior to a strong human + machine + inferior process.

This result seems awfully similar to some of the other results I’ve reported on in the past.  Continue reading

Posted in Explanatory Search, Exploratory Search, Information Retrieval Foundations | 2 Comments

What You Can Find Out

The Edge has published their annual question for 2010:

HOW IS THE INTERNET CHANGING THE WAY YOU THINK?

As an Information Retrieval research scientist, I was of course quite interested in what the search folks had to say.  I found this blurb from Marissa Mayer intriguing:

It’s not what you know, it’s what you can find out. The Internet has put at the forefront resourcefulness and critical-thinking and relegated memorization of rote facts to mental exercise or enjoyment. Because of the abundance of information and this new emphasis on resourcefulness, the Internet creates a sense that anything is knowable or findable — as long as you can construct the right search, find the right tool, or connect to the right people. The Internet empowers better decision-making and a more efficient use of time…

The Web has also enabled amazing dynamic visualizations, where an ideal presentation of information is constructed — a table of comparisons or a data-enhanced map, for example. These visualizations — be it news from around the world displayed on a globe or a sortable table of airfares — can greatly enhance our understanding of the world or our sense of opportunity. We can understand in an instant what would have taken months to create just a few short years ago. Yet, the Internet’s lack of structure means that it is not possible to construct these types of visualizations over any or all data. To achieve true automated, general understanding and visualization, we will need much better machine learning, entity extraction, and semantics capable of operating at vast scale.

It sounds like there is an increased awareness of (and respect for) Exploratory Search.  I’ve heard this via private channels, but this is the first time I’ve seen an acknowledgment of the need for more exploratory search from such an official channel.

I do want to point out, however, that in order to make this work at web scale, we won’t just need better automated methods.  That is, we cannot rely solely on machine learning, entity extraction, or web-scale semantics.  Rather, what is also desperately needed is a way for the user him- or herself to inject personal semantics and structure into the search, visualization, and comparison process.  The search engine itself needs to be responsive to the structure that the user is giving it, and rearrange itself around that information.
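To make this slightly more concrete, here is a hedged sketch; the function, the airfare records, and the facet are all my own invented example of what “injecting personal structure” might look like:

```python
def regroup(results, user_facet):
    """Rearrange a flat result list around a user-supplied facet function:
    the engine responds to the structure the user gives it."""
    grouped = {}
    for r in results:
        grouped.setdefault(user_facet(r), []).append(r)
    return grouped

# For instance, a traveler comparing airfares (cf. Mayer's sortable table)
# might impose her own, personal notion of structure on the results:
fares = [
    {"airline": "A", "price": 420, "stops": 0},
    {"airline": "B", "price": 310, "stops": 2},
    {"airline": "C", "price": 350, "stops": 1},
]
by_convenience = regroup(
    fares, lambda r: "nonstop" if r["stops"] == 0 else "connecting")
```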

I am afraid that I am not being very clear in the vision that I’m attempting to lay out, so let me draw an analogy to parametric and non-parametric statistical modeling.  Continue reading

Posted in Exploratory Search, Information Retrieval Foundations | 6 Comments

Search versus Recommendation: Not The Only Tension

Greg Linden has an interesting post on search in a domain like YouTube.  I reproduce it here because I would like to elaborate on it:

The article focuses on YouTube’s “plans to rely more heavily on personalization and ties between users to refine recommendations” and “suggesting videos that users may want to watch based on what they have watched before, or on what others with similar tastes have enjoyed.”  What is striking about this is how little this has to do with search. As described in the article, what YouTube needs to do is entertain people who are bored but do not entirely know what they want. YouTube wants to get from users spending “15 minutes a day on the site” closer to the “five hours in front of the television.” This is entertainment, not search. Passive discovery, playlists of content, deep classification hierarchies, well maintained catalogs, and recommendations of what to watch next will play a part; keyword search likely will play a lesser role.

My feeling is that the dichotomy being drawn does not exhaustively cover the space.  I would characterize the space using the following two orthogonal dimensions: (1) Information Need Clarity and (2) User Engagement.  The first dimension (clarity) is the degree to which the user understands his or her own information need, i.e. has something specific in mind that he is looking for and/or understands what he needs to do to find it.  That need may either be well understood, or (to borrow Nick Belkin’s terminology) “anomalous”: The user doesn’t know what he or she doesn’t know.  The second dimension is the level at which the user applies himself to the information seeking process.  That level may be active or passive.

Greg points out two modes: “Active Understood” (typical navigational web search) and “Passive Anomalous” (entertainment/discovery/recommendation).  But I believe that there are more than these two modes.  A large, interesting design space opens up when one realizes that information seeking can be “Active Anomalous” and “Passive Understood”.

[Figure: the two dimensions of information seeking, Information Need Clarity (understood vs. anomalous) crossed with User Engagement (active vs. passive)]
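Since the figure is just a 2×2 grid, the whole design space also fits in a trivially small lookup; the two new quadrant labels are my own characterization, not Greg’s:

```python
# (engagement, clarity) -> the system paradigm I would map to that quadrant
QUADRANTS = {
    ("active",  "understood"): "keyword / navigational search",
    ("passive", "anomalous"):  "recommendation and passive discovery",
    ("active",  "anomalous"):  "exploratory search",
    ("passive", "understood"): "collaborative information seeking",
}

print(QUADRANTS[("active", "anomalous")])  # -> exploratory search
```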

Exploratory Search is a good example of Active Anomalous seeking.  One doesn’t yet fully know or understand what it is that one is looking for, but at the same time one is willing to engage with an information system in order to discover what it is that he or she does not yet know.  And the system itself is designed not necessarily toward trying to answer a well understood need, but toward helping the user map out and better comprehend a space.

Collaborative Information Seeking (see here and here and here) is a good example of where a need may be well understood, but a user does not necessarily have to actively express every last query detail in order to get more information on a topic.  Why not?  Because when User #1 is explicitly collaborating with User #2, an algorithmic mediation engine can push some of User #2’s activity on to User #1 without requiring User #1 to make additional effort.  Note that I am not implying that every aspect of collaborative information seeking is passive; quite the contrary: it requires at least one co-collaborator to be active.  I am only pointing out that it is a domain in which it becomes possible for a user to passively obtain specific information on a well understood need.
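Here is a minimal sketch of what such an algorithmic mediation engine might do.  The class and the push policy are hypothetical illustrations, not the design of any particular system:

```python
class MediationEngine:
    """Hypothetical mediator between explicit co-searchers: documents
    surfaced by one user's active searching are pushed to the others."""

    def __init__(self):
        self.found = {}  # user -> list of (doc, score) pairs from their queries

    def record_activity(self, user, scored_docs):
        """Called whenever one collaborator's query returns scored results."""
        self.found.setdefault(user, []).extend(scored_docs)

    def push_to(self, user, k=5):
        """Surface the best documents found so far by the *other*
        collaborators, at no additional effort to this user."""
        from_others = [ds for other, docs in self.found.items()
                       if other != user for ds in docs]
        return sorted(from_others, key=lambda ds: ds[1], reverse=True)[:k]

# User #2 searches actively; User #1 passively receives the best of it.
engine = MediationEngine()
engine.record_activity("user2", [("apartment-listing-17", 0.92),
                                 ("apartment-listing-03", 0.71)])
print(engine.push_to("user1"))
```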

There is a lot of discussion in the Information Retrieval community on the similarities and differences between Search and Recommendation.  A fruitful tension opens up as one travels back and forth along the diagonal from Active Understood to Passive Anomalous; the two approaches often end up complementing each other.  Where I see much less discussion is on the tension that opens up along the other diagonal, between Passive Understood and Active Anomalous.  When Exploratory Search meets Collaborative Information Seeking, it yields Collaborative Exploratory Search and a whole host of interesting possibilities.  Over the coming year I will be blogging more about the tension along this alternative diagonal (both here and on the FXPAL blog) and what it means for the Information Retrieval systems that I and others are designing.  Happy 2010!

Posted in Collaborative Information Seeking, Exploratory Search, Information Retrieval Foundations | 4 Comments

A Fragile Local Maximum for the Web

On Twitter today, Josh Young made an interesting observation to which I would like to call attention:

Ya, @jerepick, with “fauxpen” attached, google’s “nav. search as the top of the stack” is a fragile local maximum for the web.

This observation is a followup to the web-wide discussion that Google kicked off about the meaning of open.  Essentially, Rosenberg says that all of Google’s products that are not at the search layer of the stack should work toward being open, but that the search layer itself should be closed.  To protect it from spammers, you understand {cough}.

Earlier in the same post, Rosenberg makes a distinction between open source and open data, calling for increased openness in both.  When it comes to defending closed search, however, this distinction gets lost.  And the distinction is important.  Here is how it translates to the search domain:

  • Open Source = Open search algorithm: letting the world know what features are used to rank pages and how those features interrelate (i.e., how they are weighted).
  • Open Data = Open search results: letting users refactor, remix, reuse, mash up, store, and locally re-search any and all results for the queries they issue, using any software they want, not just Google’s.  (A concrete sketch of this follows below.)
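Purely as illustration of the second bullet, here is what “open data” could look like mechanically.  The endpoint, schema, and field names below are entirely hypothetical (no such open results API exists); the point is only that the user could fetch, store, and later remix her own results with any software she likes:

```python
import json
import urllib.parse
import urllib.request

RESULTS_ENDPOINT = "https://search.example.org/results"  # hypothetical

def fetch_results(query):
    """Fetch results from a hypothetical open search-results endpoint."""
    url = RESULTS_ENDPOINT + "?q=" + urllib.parse.quote(query)
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["results"]  # assumed schema: a list of dicts

def save_locally(results, path):
    """Store results locally so they can be re-searched, remixed, or mashed
    up later with whatever software the user prefers, not just Google's."""
    with open(path, "w") as f:
        json.dump(results, f, indent=2)

if __name__ == "__main__":
    results = fetch_results("collaborative information seeking")
    save_locally(results, "my_results.json")
```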

The excuse given for why Google cannot open up is that spammers would be able to game the engine.  But if we look closely, we’ll see that this excuse is primarily, if not exclusively, related to the “open source” aspect of openness.  Black-hat SEO algorithmic gaming is not an issue when it comes to users re-using and remixing results.

And so the point (I think) Josh is making is that by closing not only the algorithm, but also the results of that algorithm, Google has effectively declared a moratorium on Internet application stack progress along that vertical.  Google is essentially saying to the Internet: Continue reading

Posted in Information Retrieval Foundations, Social Implications | Leave a comment

Google and the Meaning of Open

There is a fantastic Google blog post today by Jonathan Rosenberg on the meaning (and value) of openness.  Whooo-boy... where do we start with this can of worms?  Guess I’ll jump right in.  Warning: This is probably the longest post I’ve written, so if you are easily bored, understand that this is not required reading.  It will not be on the test.

Here we go:

At Google we believe that open systems win. They lead to more innovation, value, and freedom of choice for consumers, and a vibrant, profitable, and competitive ecosystem for businesses.

Agreed!  I’m fully on board with the spirit of this opening statement!

Many companies will claim roughly the same thing since they know that declaring themselves to be open is both good for their brand and completely without risk.

True.  So the question arises: What happens when being open carries real risk?  Do you open up those areas of your business as well?  Or do you forever keep your most valuable layer of the stack closed and proprietary, in terms of both closed source and not-fully-open data?

We run the company and make our product decisions based on these principles, so I encourage you to carefully read, review, and debate them. Then own them and try to incorporate them into your work. This is a complex subject and if there is debate (and I’m sure there will be) it should be in the open! Please feel free to comment.

I like the spirit of this discussion so far.  I earnestly believe that Google is debating these things internally.  But I also take them at their word that they would like this debate to be in the open.  Consider this blog post part of my ongoing comment and engagement in what I consider to be an extremely important area: the organization and dissemination of information. Continue reading

Posted in General, Information Retrieval Foundations, Social Implications | 5 Comments