Recently, I’ve found the endless stream approach particularly bad when dealing with anything chronological. If I want to go to the very beginning of something and I have to scroll through umpteen thousand pages, it becomes a TERRIBLE user experience. If you’re going “endless river”, you should also give the user a way to “skip to record 5,678”.
“In a perfect world, every search would result in a page with a single item: exactly the thing you were looking for.”
I suspect that, early on, Google had the same idea, and the I’m Feeling Lucky button was their attempt at realizing it. Of course, that wasn’t quite how things panned out.
Back in October 2010, I gave a talk on the NoSQL database Riak. Something I mentioned in that talk is very similar to what you are saying here: in particular, that no one cares to paginate through thousands (or far more) of results. Beyond that, I was looking at it from a database-systems perspective. Certain distributed, non-transactional, non-relational systems have difficulty, in certain cases, keeping absolute counts of items. My insight is that the larger the result set, the less people care about pagination.
I couldn’t agree more with Christopher Allen-Poole. Whenever I see this, I often want to get to the “bottom” (or often “first”) entry. I can’t just press END any more; I have to press it over and over and over again.
If the devs could capture the keypress of the END key and just load ALL the remaining content, that would be great.
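That END-key idea could be sketched roughly like this. The names (`loadPage`, `loaded`, `total`) are hypothetical stand-ins, not any real site’s API; the pure helper just computes which pages remain, and the DOM wiring is shown as a comment:

```javascript
// Pure helper: which page numbers still need fetching before we can
// jump to the true bottom of the list.
function pagesLeftToLoad(loadedPages, totalPages) {
  const remaining = [];
  for (let p = loadedPages + 1; p <= totalPages; p++) remaining.push(p);
  return remaining;
}

// In the browser, the wiring might look something like this
// (loadPage, loaded, and total are illustrative):
// document.addEventListener('keydown', async (e) => {
//   if (e.key !== 'End') return;
//   e.preventDefault();
//   for (const p of pagesLeftToLoad(loaded, total)) await loadPage(p);
//   window.scrollTo(0, document.body.scrollHeight);
// });
```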
My company uses a similar method. Where the total number of items (“Results”) is known, we output page “placeholders”; when a user stops scrolling at one of these placeholders, we go and get that page’s worth of results and replace the placeholder.
We know ahead of time how many results there are per page, and we have a strict size per result, so each placeholder’s height is set to (approximately) the height the results would occupy.
This fixes the scroll bar issue and only loads up what the user wants to see.
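A minimal sketch of this placeholder scheme, assuming a fixed results-per-page and an approximate per-result height (both constants are illustrative, not the company’s actual values):

```javascript
// Assumed layout constants: every page placeholder gets the same height.
const RESULTS_PER_PAGE = 25;
const RESULT_HEIGHT_PX = 80; // approximate, as noted above

function pageHeight() {
  return RESULTS_PER_PAGE * RESULT_HEIGHT_PX;
}

// Which page placeholder sits under the viewport at a given scroll offset?
function pageAtScrollTop(scrollTop) {
  return Math.floor(scrollTop / pageHeight()) + 1; // pages are 1-based
}

// When the user stops scrolling, fetch that page and swap it in
// (debounce, fetchPage, and replacePlaceholder are hypothetical):
// container.addEventListener('scroll', debounce(() => {
//   const page = pageAtScrollTop(container.scrollTop);
//   if (!loaded.has(page)) {
//     fetchPage(page).then(html => replacePlaceholder(page, html));
//   }
// }, 150));
```

Because every placeholder has a fixed height, the scroll bar reflects the full result set from the start, which is exactly the “scroll bar issue” this fixes.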
Seems like there are a couple of ‘complaints’ about infinite scrolling that really have nothing to do with the actual scrolling function.
- “What if I want to jump to a specific…”
I use Amazon and NewEgg all the time. When you sort by price, Lowest to Highest, do you actually jump to page 7 to see what “mid range” prices are? No. You use the price range facet on the left to narrow to what you’re really after.
The fact that you can’t jump to a specific place when doing infinite scrolling means the designer/developer didn’t think about what her users really want. You can have this same problem with a paginated interface.
- “I hate how I can’t get to the bottom of the page”
Actually, I hate this about Facebook’s news feed too. This isn’t a scrolling issue, it’s a UX issue. The manual “show more” link is a mitigation, but the real solution is not to force people to go to the bottom of the page to do something useful (ask why they would want to in the first place).
- “It will lag the browser”
This is more of an implementation issue and is the one thing I wonder about with these “infinite” scrolling solutions. I totally agree: if I scrolled through the equivalent of 20 pages, won’t my browser eventually grind to a halt under all that memory? I think popping results off near the top is a good approach, but technically this seems pretty difficult; also, what happens to the scroll bar in that case?
- How do I figure out where I am?
This is context-specific, I feel. If you are sorting alphabetically, you should provide a quick way to jump to specific letters (like a dictionary). If you are sorting by date, a timeline-esque jumper works just as well. If you are sorting by price, a range works well.
I am in the midst of redesigning one of my sites. I want to implement “infinite” scrolling because it is for browsing a collection of items I (the user) own. What I am worried about is: what do I do with previous results? The last thing I want is to hang the browser. I like the idea of removing previous items, but my worry is how the user can “immediately” jump back to the beginning of the list when you’ve removed those items from the DOM and effectively need to load/request them again. I think a “virtual” scrollbar would work, if done correctly: one that knew the context of its environment (how many items there are, where you are, an ‘index’ of the content for jumping to specific points). Sorting alphabetically? Pick a letter on the ‘scrollbar’. Sorting chronologically? Pick a point in time to jump to. I would be interested in doing a demo of how this would work.
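The ‘index’ part of that virtual scrollbar could be sketched as follows, for the alphabetical case. This is purely illustrative: it assumes the sorted item names are available (or that the server can answer the same question), and jumping then means requesting items from the returned offset rather than keeping everything in the DOM:

```javascript
// Given a sorted list of names, record the offset of the first item
// for each leading letter, so the scrollbar can jump straight there.
function buildLetterIndex(sortedNames) {
  const index = {};
  sortedNames.forEach((name, i) => {
    const letter = name[0].toUpperCase();
    if (!(letter in index)) index[letter] = i; // first occurrence wins
  });
  return index;
}

// Jumping to "C" then becomes: fetch items starting at index["C"]
// from the server and scroll the virtual list to that offset,
// re-fetching earlier items on demand instead of retaining them all.
```

A chronological index would work the same way, keyed by month or year instead of by letter.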
We’ve made an attempt at solving some of the friction and pain with pagination in forums by having “in line” pagination in our forum. Take for example this forum post which has an unreasonably large 1,510 replies (at the time of writing this):
In the URL, the -1 signifies the “last page.” The algorithm shows the first post, a bunch of links to other pages, and then the last 1 to 10 posts, depending on how many posts are on the last page.
When a user clicks on a page number, an AJAX call pulls that page into the one you are viewing and splits up the remaining page links.
I agree with Jeff that forums are fundamentally broken; however, comparing the traditional pagination model in forums with this one, I feel we’ve made the web a better place.
Granted, in the example above, with a forum topic of 150 pages, this is less useful. However, most forum topics run between 1 and 3 pages, so pulling the other 2 pages into a single page that you can scroll up and down through makes this much more usable.
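The link-splitting step of that in-line scheme might look like this. This is a sketch under assumed names, not the forum’s actual code; the AJAX fetch itself is elided, and the helper just decides which page links render before and after the inserted posts:

```javascript
// After a page of replies is pulled into the thread, remove its link
// and split the remaining page links around the inserted posts.
function splitPageLinks(pageLinks, loadedPage) {
  const i = pageLinks.indexOf(loadedPage);
  if (i === -1) return { before: pageLinks, after: [] };
  return { before: pageLinks.slice(0, i), after: pageLinks.slice(i + 1) };
}
```

So if a thread shows links to pages 2-5 and the user clicks page 4, links 2-3 stay above the newly inserted posts and link 5 moves below them.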
Concur. Pagination hasn’t made sense since Ajax became mainstream and sane people realized that “the page fold” doesn’t exist.
This is also a testament to the fact that… [cont’d]
<-Prev [Comment page 1 of 35] Next->
I’m all in favour of the idea of endless scrolling. Your search engine should be smart enough to get you the result you want within the first 10 items; the rest becomes exponentially less important. I see this in all sorts of apps, but at my day job I’m developing enterprise solution apps.
What I’ve noticed is that there are no plans or attempts to move to endless scrolling, while this could definitely add value to the user experience.
Do you see this happening in an enterprise environment, or should we just implement the search in another way?
This post describes how I solved problems with pagination: http://programmerstrouble.blogspot.com/2011/03/design-pattern-pagination-with-useful.html
I added a hovering hint that shows you exactly what is on that page (i.e., the price range from-to if you are currently sorting on the price field).
I posted this on Hacker News once, but it ended up on page 2, so nobody really noticed.
I agree with everything you’re saying except for this part:
In a perfect world, every search would result in a page with a single item: exactly the thing you were looking for.
If I search for “Ethiopian restaurants in my city”, ideally I’d see all three restaurants, with photos of the interior, a relevant subsample of the menu items/prices, and some information about the service/quality/reviews.
If I search for “buy car”, I would like to see an array of options, like a matrix which guides me through the relevant tradeoffs (car specs, lease terms) and the $ cost of moving around in that tradeoff space.
Even a “perfect” search engine isn’t going to know (although it could presume/make assumptions about) what car I’m going to want, when I don’t even know yet. Rather than trying to creepily prognosticate about what my answer’s going to be, an ideal search engine would just lead me to the next logical question, along with the relevant info to answer it.
I absolutely hate the dynamic loading design. Ever since slashdot switched to it, I’ve been using their /archive.pl page.
It’s a nice idea in theory, but in practice it doesn’t work. I have less friction when using a paginated page, because I always open the next 5 pages as new tabs so that they can load in the background. I can’t do this with the dynamic design - I have to wait a second for each page to load. It’s also a pain if I just want to skim the results, due to the breaks between pages.
- Give the user a means to control the initial number/proportion of items loaded, so that someone who is going to read the entire thing can set it to 100% and have it all load for them.
- Actually have the pages load seamlessly by caching them beforehand. The browser shouldn’t wait until you reach the bottom of the page to start downloading the next one; it should start the moment the first page is done, and download the 3rd the moment it starts to display the 2nd page.
- Or better yet, download the first 10 pages along with the first, but don’t display them until the user scrolls to them.
There are two problems here: downloading all that data takes a while, and displaying all that data makes it unmanageable. Trying to deal with both at the same time only confuses the issue and neglects part of it.
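The prefetching suggestion splits cleanly into a decision step and a fetch step. A sketch of the decision step, with an assumed look-ahead window (the constant and all names are illustrative, not any browser standard):

```javascript
// How many pages ahead of the current one to fetch in the background.
const PREFETCH_AHEAD = 3;

// Decide which upcoming pages to download now; display still waits
// until the user scrolls to them, separating the "downloading takes
// a while" problem from the "displaying it all" problem.
function pagesToPrefetch(currentPage, cachedPages, totalPages) {
  const wanted = [];
  const last = Math.min(currentPage + PREFETCH_AHEAD, totalPages);
  for (let p = currentPage + 1; p <= last; p++) {
    if (!cachedPages.has(p)) wanted.push(p);
  }
  return wanted;
}
```

For example, after rendering page 2 with pages 1-3 already cached out of 10, this asks for pages 4 and 5 in the background.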
Sometimes there are no “relevant” items in a search; the whole set of items is the only relevant unit. For example, in a lexicon or dictionary search, you may want to get a list of, say, all the words that start with “da” and end in “n”, shorter than 10 characters, something like:
“lemma:da*n length:[* TO 10]”
When every item of a result set is relevant, what kind of pagination would you use?
I think Alex Micek’s progressively-enhanced infinite scroll is the most impressive - doesn’t break the back button: http://tumbledry.org/2011/05/12/screw_hashbangs_building
Endless pagination should not break deep linking.
I’m glad you mentioned this. I’d take badly implemented traditional pagination over badly implemented infinite scrolling any time.
The only people who don’t like endless scrolling are the marketing people, because you get fewer pageviews.
Endless scrolling strains browser resources, and when the browser finally crashes and releases its memory, it has no record of the position you were in. Pagination doesn’t cost much on the server side (unless you try to keep an accurate count at all times, but that’s rarely important). Pushing the work to the client may be seductive because it’s the newer technique, but it doesn’t even provide increased responsiveness because web developers tend to completely disregard the costs of client-side inefficiency (particularly for memory, which sees a sort of tragedy of the commons).
With such endless pagination, one could also provide better real-time filtering.
By letting the user hide/remove results that have nothing to do with their search, similar non-relevant results could fade away while scrolling. (And a half-second transition would give the user the chance to check that the fading results really weren’t helpful.)
When the dataset to be displayed is something unsortable, like Google’s default search results, I strongly agree with endless scrolling (actually, I created a tutorial about it a few years ago: http://www.webresourcesdepot.com/load-content-while-scrolling-with-jquery/ ; simply put, I’m a fan of it).
But if the dataset is something more like a list whose fields can be sorted (like a list of users, or cars with their model, mileage, and seller’s city, etc.), then pagination makes more sense, as you can guess what may be listed on page 964.
It gets even better if hovering over a page number in the pagination informs you about the records on that page (hovering over page 968 could say “Netherlands-Nigeria” if the records are listed by country), so you paginate without guessing.
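Computing those hover hints is cheap if the sort field is available: for each page, take the first and last value in its slice. A minimal sketch with illustrative names (in practice the server would answer this per page rather than shipping the whole sorted list):

```javascript
// First and last sorted value on a given page, for a hover tooltip
// like "Netherlands - Nigeria". Pages are 1-based.
function pageHint(sortedValues, pageNumber, perPage) {
  const start = (pageNumber - 1) * perPage;
  const slice = sortedValues.slice(start, start + perPage);
  if (slice.length === 0) return null; // page is out of range
  return `${slice[0]} - ${slice[slice.length - 1]}`;
}
```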
They (Google) are doing some experiments with pagination on the Gmail blog: http://gmailblog.blogspot.com.br/
Nothing great, but it’s already something.