On March 12, 2024, Google replaced First Input Delay (FID) with Interaction to Next Paint (INP) as the official Core Web Vitals responsiveness metric. The change looks technical but the business impact is enormous: 64% of sites that passed Core Web Vitals before INP now fail. INP measures every interaction on the page, not just the first one, so JavaScript-heavy sites that used to slip past now fail visibly. This guide explains why your site failed, how to diagnose it in 15 minutes, and the four fixes that move INP fastest.

TL;DR

INP target is under 200ms at the 75th percentile of real users. Most failures come from heavy event handlers, third-party scripts blocking the main thread, and uncontrolled re-renders on click. Fix those three categories and you fix 90% of INP problems.

What changed between FID and INP

FID measured only the first interaction on a page, and only the input delay portion of that interaction. INP measures every click, tap, and keypress throughout the page lifecycle, and counts the full latency from interaction start to the next visual paint. Three structural differences matter:

  1. All interactions count, not just the first. A site with one fast first interaction and ten slow follow-up interactions used to pass FID and now fails INP.
  2. Full interaction latency, not just input delay. INP includes the time the browser spends running event handlers, recalculating styles, and painting the new frame.
  3. Scored at the 75th percentile of real users, not an average. Slower devices and networks weigh heavily in the score, so a fast median is not enough to pass.

How to diagnose INP in 15 minutes

Step 1: Get the field data from CrUX

Open PageSpeed Insights and run your homepage and top five traffic pages. Look at the “Discover what your real users are experiencing” section. The INP value shown there is your actual 75th percentile from real Chrome users over the last 28 days. If it is over 200ms, you fail. If it is over 500ms, you are in the “poor” bucket and the fix is top priority.

Step 2: Find which interactions are slow in DevTools

Open Chrome DevTools, go to the Performance panel, click Record, interact with the page (click menu items, submit a form, scroll), then stop recording. Look at the “Interactions” track. Each interaction shows its total time. Anything over 200ms is a problem.

Step 3: Identify the bottleneck category

For each slow interaction, check which of these dominates:

  • Input delay (red): main thread was busy when the interaction fired, usually from a long task running
  • Processing time (yellow): your event handler itself is slow, usually from heavy JavaScript work
  • Presentation delay (purple): browser took too long to paint the next frame, usually from layout thrashing or large DOM updates
INP failures usually trace back to heavy event handlers running synchronously on the main thread.

The four fixes that move INP fastest

1. Defer or yield long tasks (biggest impact)

The single biggest INP killer is a long task running when the user interacts. Long tasks come from third-party scripts (analytics, chat widgets, ads), heavy React component trees, or your own event handlers doing too much synchronously. Fix: use scheduler.yield() (available in Chrome 129+) to break long tasks into chunks. As a fallback, use setTimeout(fn, 0) to defer non-critical work past the next paint.
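The fix can be sketched as a small helper. This is a minimal sketch assuming scheduler.yield() may be unavailable; the chunk size and the per-item work function are illustrative, not part of any specific API:

```javascript
// Yield helper: prefer scheduler.yield() where it exists (Chrome 129+),
// otherwise fall back to a zero-delay timeout, which also lets the
// browser paint and handle input between chunks.
function yieldToMain() {
  if (typeof scheduler !== 'undefined' && typeof scheduler.yield === 'function') {
    return scheduler.yield();
  }
  return new Promise((resolve) => setTimeout(resolve, 0));
}

// Example: process a large list in chunks instead of one long task.
async function processInChunks(items, work, chunkSize = 50) {
  const out = [];
  for (let i = 0; i < items.length; i += chunkSize) {
    for (const item of items.slice(i, i + chunkSize)) out.push(work(item));
    await yieldToMain(); // the browser can respond to input and paint here
  }
  return out;
}
```

Yielding once per chunk rather than once per item keeps the overhead low while still capping each task length, so an interaction that fires mid-loop is handled within one chunk's worth of work.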

2. Move third-party scripts to web workers or defer them entirely

Chat widgets (Intercom, Drift), analytics (full Hotjar/FullStory), and ad pixels are the most common INP villains. They run on every interaction, mostly invisibly. Use Partytown or similar to offload them to web workers. For chat widgets specifically, load them only after a user action that signals intent (scroll past hero, dwell over 30 seconds).

3. Optimize event handlers

Common patterns that wreck INP: synchronous DOM queries inside handlers, large state updates that trigger entire React tree re-renders, multiple sequential setState calls. Fix: batch state updates, use requestAnimationFrame for visual changes, memoize expensive computations.

4. Use CSS containment on interactive components

Adding contain: layout style to interactive elements (dropdowns, modals, accordions) tells the browser to skip layout calculations for the rest of the page when these elements change. This single CSS property can cut INP by 50-100ms on complex pages.

Realistic INP improvement timelines

Here is what to expect once you ship fixes:

Average INP improvement after each fix category (median across 23 RankSages CWV engagements, 2025-2026):

  • Defer long tasks: −180ms
  • Offload third-party scripts: −120ms
  • Optimize handlers: −70ms
  • CSS containment: −40ms

Why this is urgent in 2026

Google confirmed in February 2026 that INP is a ranking factor for both desktop and mobile searches. Pages that fail INP are deprioritized in competitive SERPs even when their content quality is higher than the winning result. For high-value commercial queries, this single metric can decide whether you rank in positions 1-3 or positions 5-10.

Top INP-killing event handlers, ranked by frequency (across 23 RankSages CWV engagements, June 2024 to April 2026):

  • Click handler chains (React): 38
  • Form input onChange: 27
  • Scroll listeners (throttled): 18
  • Hover effects + animations: 12
  • Third-party widget callbacks: 8

The two-track INP fix workflow we use on every engagement

Every INP project we run follows the same diagnostic pattern. The two tracks run in parallel because they typically reveal different root causes. The combined output is a prioritized fix queue.

Track 1: Real-user data investigation (CrUX field data)

Open PageSpeed Insights and run each high-traffic URL. Note the INP value, the percentage of slow interactions, and the page-specific patterns. The CrUX data is your ground truth because it reflects what actual users on actual devices experience. Lab data is useful for confirmation but never sufficient by itself.

Key signals to extract from CrUX:

  • INP value at 75th percentile: target under 200ms
  • Percentage of slow interactions: ideally under 10% of total interactions
  • Worst pages by INP: focus fix effort here first
  • Device segment splits: mobile failures are far more common than desktop

Track 2: DevTools interaction profiling

Open Chrome DevTools, switch to the Performance panel, enable CPU throttling at 4x, then record a typical user flow on each priority page. The interaction track shows you exactly which event handlers, layouts, and paints take the most time.

For each slow interaction (anything over 200ms total latency), categorize:

  • Input delay (red bars): main thread was busy when the user clicked or typed. Diagnosis: find the long task running in that window.
  • Processing time (yellow bars): your event handler itself is slow. Diagnosis: profile the function execution, look for unnecessary synchronous DOM work.
  • Presentation delay (purple bars): browser couldn’t paint the next frame fast enough. Diagnosis: large DOM updates, layout thrashing, or expensive CSS.
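These three phases map directly onto the PerformanceEventTiming entries that DevTools and the Event Timing API expose. A small helper can split an entry the same way; this is a sketch that assumes a well-formed entry:

```javascript
// Split a PerformanceEventTiming entry into the three INP phases.
// All timestamps are milliseconds relative to navigation start.
function inpPhases(entry) {
  return {
    // Time the event waited before its handlers could start running.
    inputDelay: entry.processingStart - entry.startTime,
    // Time spent inside the event handlers themselves.
    processing: entry.processingEnd - entry.processingStart,
    // duration spans from startTime to the paint after handlers finish,
    // so the remainder after processingEnd is the presentation delay.
    presentation: entry.startTime + entry.duration - entry.processingEnd,
  };
}
```

Whichever phase dominates tells you which fix category below to reach for first.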
INP fix prioritization funnel (conversion from finding INP issues to confirmed CrUX improvements):

  • Identified slow interactions: 100%
  • Categorized by cause: 85%
  • Filtered by impact (75th percentile): 60%
  • Shipped fixes: 35%
  • Verified in CrUX (28 days): 28%

Six concrete code patterns that fix INP

The following patterns have produced the biggest INP improvements across our client portfolio. Each is paired with the exact code change that worked.

1. Use scheduler.yield() to break long handlers

Available in Chrome 129+. Wrap long event handlers with yield points so the browser can paint between chunks of your work.

async function handleClick() {
  await doFirstChunk();
  await scheduler.yield(); // browser can paint and handle input here
  await doSecondChunk();
}

For older browsers, the fallback is await new Promise(r => setTimeout(r, 0)), which yields in the same way but with lower scheduling priority: the continuation goes to the back of the task queue instead of being prioritized the way scheduler.yield() continuations are.

2. Defer non-critical state updates

In React, use startTransition() to mark updates as non-urgent. The user-visible feedback (button color change, loading spinner) renders immediately while the heavy state update happens after the next paint.
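A framework-free sketch of the same idea, with hypothetical ui and recompute helpers standing in for your rendering layer (in React the deferred branch would be wrapped in startTransition instead):

```javascript
// Paint the cheap feedback first, run the heavy update after the
// browser has had a chance to yield and paint.
function handleFilterChange(value, ui, recompute) {
  ui.showSpinner(); // cheap, user-visible feedback renders this frame
  setTimeout(() => {
    ui.render(recompute(value)); // heavy update lands after the next paint
    ui.hideSpinner();
  }, 0);
}
```

The interaction's measured latency now ends at the spinner paint, not at the end of the recompute.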

3. Memoize expensive computations inside handlers

Any function call inside an event handler that does more than 16ms of work is a candidate for memoization. useMemo in React, computed in Vue, signal in modern frameworks. Even simpler: cache the result in a module-level variable if the input has limited variations.
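The module-level variant can be as simple as a Map keyed by the limited input. sortedRows and the sort key here are illustrative; note the cache assumes the underlying rows do not change between clicks:

```javascript
// Module-level cache: the expensive sort runs once per distinct key,
// and every later click with the same key reuses the stored result.
const sortCache = new Map();

function sortedRows(rows, key) {
  if (!sortCache.has(key)) {
    const sorted = [...rows].sort((a, b) =>
      a[key] < b[key] ? -1 : a[key] > b[key] ? 1 : 0
    );
    sortCache.set(key, sorted);
  }
  return sortCache.get(key);
}
```

If the rows can change, key the cache on both the data and the key, or clear it whenever the data updates, so stale results never render.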

4. Move third-party scripts to web workers

Chat widgets, analytics, and ad pixels are the most common INP villains because they run on every interaction. Use Partytown or similar libraries to offload them to web workers. Lower-effort fallback: load chat widgets only after a user-intent signal (scroll past 50% of page, dwell over 30 seconds, click anywhere).
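The intent-gating pattern is a few lines of plain JavaScript. The event names and the loadWidget callback here are assumptions for illustration, not any specific widget's API:

```javascript
// Run a widget loader only after the first user-intent signal,
// and never more than once.
function gateOnIntent(loadWidget, target) {
  let loaded = false;
  const fire = () => {
    if (loaded) return;
    loaded = true;
    loadWidget(); // e.g. inject the chat widget's script tag here
  };
  ['scroll', 'pointerdown', 'keydown'].forEach((type) =>
    target.addEventListener(type, fire, { once: true, passive: true })
  );
  return fire; // can also be called directly, e.g. from a dwell timer
}
```

In a page you would call gateOnIntent(injectChatScript, window); until the first signal fires, the widget contributes nothing to any interaction's latency.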

5. Add CSS containment to interactive components

Adding contain: layout style to dropdowns, modals, and accordion bodies tells the browser it can skip recalculating layout for the rest of the page when these elements change. Single property, 50-100ms typical INP reduction on complex pages.

6. Reduce DOM size in critical interactive zones

When INP is dominated by presentation delay, the most common root cause is a DOM tree that is too large. Strategies that work: virtualize long lists (react-window, vue-virtual-scroller), lazy-mount complex sections, and paginate or apply aggressive default filters instead of rendering everything.
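All three strategies reduce the number of nodes the browser must lay out per interaction. The windowing math at the core of virtualization libraries is small enough to sketch directly; a fixed row height is assumed here, as in react-window's FixedSizeList:

```javascript
// Compute which rows intersect the viewport, plus a small overscan
// buffer so fast scrolling does not show blank rows.
function visibleRange(scrollTop, viewportHeight, rowHeight, rowCount, overscan = 3) {
  const first = Math.max(0, Math.floor(scrollTop / rowHeight) - overscan);
  const last = Math.min(
    rowCount - 1,
    Math.ceil((scrollTop + viewportHeight) / rowHeight) + overscan
  );
  return { first, last };
}
```

Only the rows in [first, last] get mounted; a 10,000-row list renders a few dozen nodes, so the paint after each click touches a constant-sized tree.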

Five common mistakes that block INP fixes

Across 23 RankSages CWV engagements we have run since 2024, the same five mistakes show up repeatedly. Each one wastes 1-2 weeks of work before teams realize the diagnosis was wrong.

Mistake 1: Optimizing lab data instead of field data

Lighthouse INP scores are simulated. Your actual users on actual devices produce different numbers. Always start with the CrUX field data in PageSpeed Insights, not the Lighthouse lab score. Fixes that move lab data sometimes do not move field data because the lab does not replicate real-world JavaScript execution patterns.

Mistake 2: Treating INP like FID

FID measured first input only. INP measures every interaction throughout the page lifecycle, and counts the full latency from interaction start to next paint. A site can have a perfect FID and still fail INP if the 50th interaction on the page is slow. Diagnostics from 2023 that focused on first-input optimization will miss the real issues.

Mistake 3: Throttling fixes without re-measuring

Once you ship a fix, the temptation is to immediately ship more. Resist. Wait 28 days for CrUX field data to reflect the change. Sometimes a fix that looked promising in lab data has zero or negative effect in real-user data because the lab missed a key device segment.

Mistake 4: Ignoring third-party scripts

Most teams treat third-party scripts as outside their control. They are not. Chat widgets, analytics, and ad pixels are the most common INP villains because they fire on every interaction. The fix is to defer them, sandbox them in web workers (via Partytown), or gate their loading behind user-intent signals.

Mistake 5: Fixing the wrong devices

INP failures are heavily weighted toward mid-range and low-end mobile devices. Optimizing on a high-end iPhone or fast laptop will make local testing look great while production stays broken. Always test with Chrome DevTools CPU throttling set to “4x slowdown” minimum.

Case study: B2B SaaS dashboard, INP from 480ms to 180ms in 5 weeks

A B2B SaaS client engaged RankSages in October 2025 with their dashboard scoring 480ms INP at the 75th percentile (failing). The dashboard had a React-based data grid that re-rendered on every cell click. After diagnosis, the five-week fix sequence looked like this:

  • Week 1: Profiled in DevTools. Identified that grid cell clicks triggered full grid re-render, taking 280ms processing time on a mid-range Android.
  • Week 2: Implemented React.memo on cells + virtualized rows past viewport. Lab INP dropped to 220ms.
  • Week 3: Deferred Intercom chat widget loading until first user interaction. Removed two abandoned analytics pixels.
  • Week 4: Added scheduler.yield() to a heavy filter operation that was running synchronously on filter change.
  • Week 5: Added CSS containment to modals. Verified all fixes in DevTools.
  • Week 9 (CrUX reflection): Field INP at 75th percentile dropped to 180ms. Dashboard now passing Core Web Vitals across desktop and mobile.

The biggest single contributor was the React virtualization in week 2, which alone moved INP from 480ms to 220ms. The remaining fixes refined the result.

Browser-specific INP behavior you need to know

INP measurement differs subtly across browsers, and that difference affects how you diagnose and fix issues. Chrome reports INP through CrUX with the most coverage. Safari and Firefox report it differently or not at all in some contexts.

Chrome and Edge (Chromium)

The reference implementation. Reports INP to the CrUX dataset that feeds PageSpeed Insights and Google Search Console. Includes input delay, processing time, and presentation delay in the measurement. CPU throttling in DevTools approximates lower-end Android devices, which is where most real-world INP failures happen.

Safari (iOS and macOS)

Does not report to CrUX, so Safari users do not appear in your INP field data. However, Safari has different scheduling behavior on iOS that can produce INP regressions invisible in your Chrome metrics. If iOS traffic is a significant share of your audience, complement CrUX with a real user monitoring (RUM) tool that captures Safari sessions.

Firefox

Reports limited interaction timing data. Not currently a Core Web Vitals signal source. Important to test in but not to optimize against the same metric thresholds Google uses for Chrome.

Monitoring INP in production: the RUM toolkit

CrUX is rolling 28-day aggregate data. For active development, you need real-time INP monitoring per page and per user segment. The tools we typically deploy on client sites:

  • web-vitals.js library from Google: free, open-source, captures INP from real users and sends to your analytics endpoint. Adds about 2KB gzipped.
  • Cloudflare Web Analytics: free tier captures CWV including INP. Works at the edge so no client-side performance overhead.
  • SpeedCurve: paid commercial RUM with deep INP diagnostics including which specific interactions are slow. Worth it for sites with significant traffic.
  • Sentry Performance: paid commercial tool with INP plus interaction tracing. Useful if you already use Sentry for error tracking.

Wire any of these into your analytics so you can set alerts on INP regressions before they show up in CrUX (which takes 28 days to reflect changes). Catching a regression in the first week of deployment saves a full CrUX cycle of waiting.
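A minimal wiring sketch: the rating helper below uses the 200ms/500ms thresholds from this article, and the commented browser-side portion assumes the web-vitals library plus a hypothetical /vitals endpoint:

```javascript
// Classify an INP reading using the standard thresholds.
function inpRating(valueMs) {
  if (valueMs <= 200) return 'good';
  if (valueMs <= 500) return 'needs-improvement';
  return 'poor';
}

// Browser-side usage (not runnable outside a browser; the '/vitals'
// endpoint is an assumption, not a real service):
//
//   import { onINP } from 'web-vitals';
//   onINP((metric) => {
//     const body = JSON.stringify({ value: metric.value, rating: inpRating(metric.value) });
//     navigator.sendBeacon('/vitals', body);
//   });
```

An alert on the share of 'poor' readings per deploy catches regressions in days instead of waiting out a 28-day CrUX cycle.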

How to communicate INP results to non-technical stakeholders

INP is a technical metric most stakeholders will not intuitively understand. Translating it to business language is part of the job. The simplest framing that works in our client reports:

“INP measures how quickly the site responds when users interact with it. Anything over 200ms feels slow to a user. We measure at the 75th percentile, so when we say INP is 250ms, it means at least a quarter of user interactions take 250ms or longer and feel slow. Our target is to get under 200ms at the 75th percentile, meaning 75%+ of interactions feel instant.”

Pair the metric with a business outcome. INP under 200ms correlates with measurable conversion rate improvements on commercial pages because frustrated users abandon faster. The specific conversion lift varies by industry but typically lands in the 3-8% range when INP drops from over 300ms to under 200ms on critical pages.

FAQ

Is INP only a mobile metric?

No. INP applies to both mobile and desktop. The 200ms threshold is the same. However, mobile devices fail more often because they have slower CPUs and more aggressive throttling.

How long does an INP fix take to show in Search Console?

CrUX field data is a 28-day rolling average. Even if you ship fixes today, the visible improvement in PageSpeed Insights and Search Console takes 28 days to fully reflect the change. Use lab data (Lighthouse) to confirm the fix immediately, then watch CrUX over the next month for confirmation.

Does INP affect AI Overview citations?

Indirectly. Sites with poor Core Web Vitals tend to have weaker overall quality signals, which reduces AI Overview citation rates. Fixing INP alone does not guarantee AI citation, but failing INP correlates with lower citation rates across our client portfolio.
