Pagination Killing Performance — Why OFFSET Gets Slower as Pages Increase

Introduction

OFFSET pagination feels intuitive: to get page 5 with 20 results per page, skip 80 rows and take 20. But "skip 80 rows" in SQL means reading 100 rows, discarding the first 80, and returning the last 20. At page 500 that's reading 10,000 rows to return 20. At page 5,000 it's reading 100,000 rows. Your most patient users (who paginate deep) get the worst performance.

The OFFSET Problem

-- OFFSET pagination — works fine for small offsets
SELECT * FROM posts ORDER BY created_at DESC LIMIT 20 OFFSET 0;    -- page 1: fast
SELECT * FROM posts ORDER BY created_at DESC LIMIT 20 OFFSET 20;   -- page 2: fast
SELECT * FROM posts ORDER BY created_at DESC LIMIT 20 OFFSET 100;  -- page 6: ok

-- But:
SELECT * FROM posts ORDER BY created_at DESC LIMIT 20 OFFSET 10000;  -- page 501
-- PostgreSQL produces 10,020 rows in sorted order (sorting them all first if
-- no index covers the ORDER BY), then discards the first 10,000
-- Time: ~500ms on 1M rows, ~5s on 10M rows

-- EXPLAIN ANALYZE shows the problem:
EXPLAIN ANALYZE SELECT * FROM posts ORDER BY created_at DESC LIMIT 20 OFFSET 10000;
-- → Limit node: the child scan reports ~10,020 rows fetched to return 20
-- → actual time: 487ms

Fix: Cursor-Based Pagination (Keyset Pagination)

Instead of "skip N rows," cursor pagination asks "give me rows after this specific row":

// ❌ OFFSET pagination — degrades with depth
async function getPostsOffset(page: number, limit = 20) {
  const offset = (page - 1) * limit
  return db.query(`
    SELECT * FROM posts
    ORDER BY created_at DESC
    LIMIT $1 OFFSET $2
  `, [limit, offset])
}

// ✅ Cursor pagination — constant time regardless of depth
async function getPostsCursor(cursor: string | null, limit = 20) {
  if (cursor) {
    // Decode cursor: { created_at, id } from previous last item
    const { created_at, id } = decodeCursor(cursor)

    return db.query(`
      SELECT * FROM posts
      WHERE (created_at, id) < ($1, $2)
      ORDER BY created_at DESC, id DESC
      LIMIT $3
    `, [created_at, id, limit + 1])  // fetch one extra to detect hasNextPage
  } else {
    return db.query(`
      SELECT * FROM posts
      ORDER BY created_at DESC, id DESC
      LIMIT $1
    `, [limit + 1])
  }
}

function encodeCursor(row: { created_at: Date; id: string }): string {
  return Buffer.from(JSON.stringify({
    created_at: row.created_at.toISOString(),
    id: row.id,
  })).toString('base64url')
}

function decodeCursor(cursor: string): { created_at: string; id: string } {
  return JSON.parse(Buffer.from(cursor, 'base64url').toString())
}

// Full endpoint
app.get('/posts', async (req, res) => {
  const limit = 20
  const cursor = req.query.cursor as string | undefined

  const result = await getPostsCursor(cursor ?? null, limit)
  const rows = result.rows

  const hasNextPage = rows.length > limit
  const items = hasNextPage ? rows.slice(0, limit) : rows
  const nextCursor = hasNextPage ? encodeCursor(items[items.length - 1]) : null

  res.json({
    items,
    pagination: {
      nextCursor,
      hasNextPage,
    },
  })
})
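One caveat with the endpoint above: `req.query.cursor` is attacker-controlled, and `decodeCursor` will throw on malformed input (invalid base64 payloads produce garbage that fails `JSON.parse`). A defensive variant — `safeDecodeCursor` is a hypothetical helper, not part of any library — returns `null` instead of crashing the request, so the handler can respond with 400:

```typescript
interface Cursor {
  created_at: string
  id: string
}

// Decode an untrusted cursor token. Returns null on any malformed input
// instead of throwing, so callers can map bad cursors to a 400 response.
function safeDecodeCursor(raw: string): Cursor | null {
  try {
    const parsed = JSON.parse(Buffer.from(raw, 'base64url').toString('utf8'))
    // Accept only the exact shape we encoded — reject anything else
    if (typeof parsed?.created_at !== 'string' || typeof parsed?.id !== 'string') {
      return null
    }
    // Reject timestamps that can't be parsed back into a Date
    if (Number.isNaN(Date.parse(parsed.created_at))) return null
    return { created_at: parsed.created_at, id: parsed.id }
  } catch {
    return null
  }
}
```

In the handler, `if (cursor && !safeDecodeCursor(cursor)) return res.status(400).json({ error: 'invalid cursor' })` before querying.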

Why Cursor Pagination Is Fast

-- Cursor query — uses index efficiently
SELECT * FROM posts
WHERE (created_at, id) < ('2026-03-10 12:00:00', 'abc-def')
ORDER BY created_at DESC, id DESC
LIMIT 20;

-- EXPLAIN shows:
-- Index Scan using posts_created_at_id_idx on posts
-- Index Cond: (ROW(created_at, id) < ROW(...))
-- No rows are fetched and thrown away — the scan starts at the cursor position
-- actual time: 0.4ms  ← Same as page 1, regardless of depth

The composite index (created_at DESC, id DESC) lets PostgreSQL seek directly to the cursor position — no rows wasted.
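The `(created_at, id) < ($1, $2)` predicate is SQL's row-value comparison, which is evaluated lexicographically: the first column decides, and only on a tie does the second column matter. A pure TypeScript sketch of that rule (a hypothetical helper, just to make the semantics concrete) shows why the `id` tiebreaker makes the cursor unambiguous even when two posts share a timestamp:

```typescript
// Lexicographic row comparison, mirroring SQL's (a, b) < (x, y):
// compare the first column; only on a tie does the second column decide.
function rowLessThan(
  [aCreated, aId]: [string, string],
  [bCreated, bId]: [string, string],
): boolean {
  if (aCreated !== bCreated) return aCreated < bCreated
  return aId < bId
}
```

Because this ordering matches the composite index's ordering exactly, PostgreSQL can satisfy the whole predicate as a single index seek.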

Required Index

-- Cursor pagination requires a composite index on the sort columns
CREATE INDEX ON posts (created_at DESC, id DESC);

-- For multi-tenant: include tenant first
CREATE INDEX ON posts (tenant_id, created_at DESC, id DESC);

-- Query pattern with tenant:
SELECT * FROM posts
WHERE tenant_id = $1
  AND (created_at, id) < ($2, $3)
ORDER BY created_at DESC, id DESC
LIMIT 20;
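The tenant-scoped pattern can be wired into the cursor function from earlier. A sketch — `buildTenantPostsQuery` is a hypothetical helper that builds a node-postgres-style `{ text, values }` pair; note that `tenant_id` stays the first predicate, matching the index column order:

```typescript
interface TenantCursorQuery {
  text: string
  values: (string | number)[]
}

// Build the tenant-scoped cursor query. Fetches limit + 1 rows so the
// caller can detect hasNextPage, same as getPostsCursor above.
function buildTenantPostsQuery(
  tenantId: string,
  cursor: { created_at: string; id: string } | null,
  limit = 20,
): TenantCursorQuery {
  if (cursor) {
    return {
      text: `SELECT * FROM posts
             WHERE tenant_id = $1 AND (created_at, id) < ($2, $3)
             ORDER BY created_at DESC, id DESC
             LIMIT $4`,
      values: [tenantId, cursor.created_at, cursor.id, limit + 1],
    }
  }
  return {
    text: `SELECT * FROM posts
           WHERE tenant_id = $1
           ORDER BY created_at DESC, id DESC
           LIMIT $2`,
    values: [tenantId, limit + 1],
  }
}
```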

When OFFSET Is Still OK

Cursor pagination has trade-offs — you can't jump to page 50 directly, and it's harder to implement. OFFSET is acceptable when:

// ✅ OFFSET is fine for small datasets (< 10,000 rows total)
// ✅ OFFSET is fine for admin interfaces where users rarely go past page 10
// ✅ OFFSET is fine for search results (users rarely go past page 5)

// For most user-facing infinite scroll or "load more" features:
// → Use cursor pagination

// If you need "jump to page N" (like a file explorer):
// → Hybrid: cursor for display, count query for total pages
// → Or: accept that page 1000 is slow (most users don't go there)
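The hybrid approach boils down to: cursor pagination serves the rows, and one (possibly cached) count query feeds a pure helper that derives the page UI. A minimal sketch — `buildPageInfo` is a hypothetical name, not a library function:

```typescript
// Derive "Page X of Y"-style metadata from a total-row count. The count can
// come from a real COUNT(*), a reltuples estimate, or a cached counter.
function buildPageInfo(totalRows: number, pageSize: number) {
  const totalPages = Math.max(1, Math.ceil(totalRows / pageSize))
  return { totalRows, pageSize, totalPages }
}
```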

Total Count Without OFFSET

// ❌ COUNT(*) on large filtered tables can be slow
const countResult = await db.query('SELECT COUNT(*) FROM posts WHERE user_id = $1', [userId])
const total = parseInt(countResult.rows[0].count)

// ✅ For an approximate table-wide total (good enough for "~5,000 results").
// Note: reltuples ignores WHERE filters, so this only fits unfiltered lists:
const approxResult = await db.query(`
  SELECT reltuples::bigint AS estimate
  FROM pg_class
  WHERE relname = 'posts'
`)

// ✅ Cached count: increment/decrement Redis counter on write
await redis.incr(`user:${userId}:post_count`)   // on create
await redis.decr(`user:${userId}:post_count`)   // on delete
const count = parseInt(await redis.get(`user:${userId}:post_count`) ?? '0')
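Whichever source the count comes from, an estimate (reltuples, or a Redis counter that can drift) shouldn't be displayed as an exact number. A small hypothetical formatter that rounds to two significant figures for a "~5,000 results" label:

```typescript
// Round an estimated count to two significant figures for display.
// Small counts are cheap to show exactly, so they pass through as-is.
function formatApproxCount(estimate: number): string {
  if (estimate < 100) return String(estimate)
  const magnitude = 10 ** (Math.floor(Math.log10(estimate)) - 1)
  const rounded = Math.round(estimate / magnitude) * magnitude
  return `~${rounded.toLocaleString('en-US')}`
}
```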

Pagination Checklist

  • ✅ Use cursor pagination for any list that might have > 100 pages
  • ✅ Create composite index on (sort_column DESC, id DESC)
  • ✅ Include tenant/user filter in index if queries are scoped
  • ✅ Fetch limit + 1 rows to detect hasNextPage without a count query
  • ✅ Encode cursors as opaque tokens (base64) — don't expose sort values directly
  • ✅ Only use OFFSET for small datasets or admin UIs
  • ✅ Avoid SELECT COUNT(*) on large tables in the hot path

Conclusion

OFFSET pagination has a performance cliff: the deeper the page, the slower the query. Cursor pagination doesn't have this cliff — page 10,000 is as fast as page 1. The implementation is a bit more complex (you need to encode/decode cursor tokens, and you can't jump to arbitrary pages), but for any user-facing infinite scroll, feed, or "load more" interface, cursor pagination is the right choice. Add the composite index, encode the cursor as an opaque token, and your pagination performance stays flat at any scale.