Open any React app with a bottom sheet. Drag it halfway down. Let go.
It jumps. Or it slides mechanically to wherever it has decided you wanted it. It does not feel like you are touching something real. It feels like you are clicking a button that happens to animate.
I got frustrated enough that I built my own.
The starting point
There is basically one real option in the React ecosystem: vaul. The README now says it is unmaintained. It also depends on @radix-ui/react-dialog, which is fine but means you are inheriting Radix's abstractions whether you want them or not. I wanted to see if I could build something with zero dependencies, just React and raw pointer events, and have it still feel good.
That turned out to be harder than I expected. And at some point the experiment stopped feeling like an experiment.
Zero dependencies means you own the physics
Most gesture libraries abstract this away. You get a useDrag hook and it hands you velocity and delta. When you write it yourself you have to actually understand what velocity means in the context of a pointer event stream.
Pointer events give you a position and a timestamp. Velocity is just position delta over time delta:
```ts
const dt = event.timeStamp - lastTimestamp
const rawVx = dt > 0 ? (event.clientX - lastX) / dt : 0
```

The problem is raw velocity is noisy. A slightly shaky hand produces samples that swing between 0.1 and 0.8 px/ms on what should be a smooth drag. If you use the raw value at pointer up to decide whether to close the drawer, it is basically random.
The fix is exponential smoothing:

```ts
const vx = smoothingFactor * rawVx + (1 - smoothingFactor) * state.velocityX
```

`smoothingFactor` is 0.3: each new sample is 30% of the output, the historical average is the other 70%. I tested this on actual phones, and 0.3 is where jitter disappears but the velocity still responds fast enough to feel accurate.
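To see what the filter does to real jitter, here is a standalone sketch. The sample values are invented for illustration, but the update rule and the 0.3 factor are the ones described above:

```ts
// Exponential smoothing: each new sample contributes 30%,
// the running average keeps the other 70%.
const smoothingFactor = 0.3

function smooth(samples: number[]): number[] {
  const out: number[] = []
  let v = 0
  for (const raw of samples) {
    v = smoothingFactor * raw + (1 - smoothingFactor) * v
    out.push(v)
  }
  return out
}

// Jittery samples from a drag that is "really" moving at ~0.45 px/ms.
const noisy = [0.1, 0.8, 0.2, 0.7, 0.3, 0.6, 0.4, 0.5]
const smoothed = smooth(noisy)

// The raw samples swing over a 0.7 px/ms range; the smoothed tail
// settles much closer to the true speed.
console.log(smoothed.map((v) => v.toFixed(2)).join(' '))
```

The raw stream oscillates wildly; the filtered tail sits in a narrow band around the true speed, which is what makes the pointer-up decision stable.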
At pointer up, the close decision uses both smoothed velocity and total average velocity, taking whichever is higher. Short flicks would get underestimated by smoothing alone since it biases toward recent frames where the finger is decelerating:
```ts
const avgVelocityPxMs = totalTime > 0 ? totalDelta / totalTime : 0
const velocityPxMs = Math.abs(smoothedVelocityPxMs) > avgVelocityPxMs
  ? smoothedVelocityPxMs
  : avgVelocityPxMs * (smoothedVelocityPxMs >= 0 ? 1 : -1)

if (velocityPxMs > 2 && isDraggingTowardClose) {
  shouldClose = true
} else if (velocityPxMs < -2) {
  shouldClose = false
  targetSnapIndex = snapPoints[snapPoints.length - 1]?.index ?? 0
} else if (Math.abs(velocityPxMs) > 0.4 && Math.abs(dragDelta) < opts.maxTranslate * 0.4) {
  const nextIndex = opts.activeSnapIndex + (isDraggingTowardClose ? -1 : 1)
  shouldClose = nextIndex < 0
} else {
  shouldClose = translateValue > closeThresholdPx
}
```

[interactive demo: slow drag vs quick flick]
The axis lock problem
A drawer slides on one axis. But a finger moves in two dimensions. If you start dragging the drawer and drift slightly sideways, the drawer should not drift too.
The fix: wait until you know which axis the user actually intends.
```ts
if (dominantAxis === null) {
  const absX = Math.abs(deltaX)
  const absY = Math.abs(deltaY)
  if (Math.max(absX, absY) >= axisLockThreshold) {
    dominantAxis = absX > absY ? 'x' : 'y'
  }
}
```

After 8px of movement, lock to whichever axis moved more. If it is the wrong axis for this drawer, cancel the gesture entirely.
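The same decision reads well as a pure function, which makes it easy to test. This is my own reduction for illustration, not hiraki's internals, and `resolveAxis` is a hypothetical name:

```ts
type Axis = 'x' | 'y' | null

// Returns null until cumulative movement crosses the threshold,
// then locks to whichever axis has moved further.
function resolveAxis(deltaX: number, deltaY: number, threshold = 8): Axis {
  const absX = Math.abs(deltaX)
  const absY = Math.abs(deltaY)
  if (Math.max(absX, absY) < threshold) return null
  return absX > absY ? 'x' : 'y'
}

// A mostly-vertical drag with some sideways drift still locks to 'y'.
console.log(resolveAxis(3, -10)) // 'y'
// Not enough movement yet: no lock.
console.log(resolveAxis(2, 3)) // null
```

Until the lock resolves, the drawer does not move at all; that short dead zone is what keeps small diagonal drifts from wiggling the sheet.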
Scroll conflict
This is the genuinely hard one.
If the drawer has scrollable content inside it, how do you know whether the user is trying to scroll the content or drag the drawer? You cannot just check which element they touched because the touch target is inside the scrollable area either way.
The answer is to check scroll position at drag start:
```ts
export function shouldDrag(target: EventTarget | null, direction: Direction): boolean {
  if (!(target instanceof Element)) return true
  const horizontal = isHorizontal(direction)
  const scrollParent = getScrollParent(target, horizontal)
  if (!scrollParent) return true
  const { scrollTop, scrollHeight, clientHeight } = scrollParent
  return direction === 'bottom'
    ? scrollTop <= 0
    : scrollTop + clientHeight >= scrollHeight - 1
}
```

For a bottom drawer, dragging is only allowed when the scroll container is at the very top. Once the user scrolls the content down, drag does nothing until they scroll back. This is how native bottom sheets work on iOS.
The off-by-one on scrollHeight - 1 is intentional. Some browsers report fractional scroll positions and rounding means you are never exactly at the bottom.
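Stripped of the DOM, the boundary check reduces to pure arithmetic. This reduction (and the `atDragBoundary` name) is mine, not the library's, but it shows why the 1px slack matters; the fractional scroll values are hypothetical but typical of what a zoomed or high-DPI page reports:

```ts
// For a bottom drawer: drag only when scrolled to the very top.
// For a top drawer: drag only when scrolled to the very bottom,
// with 1px of slack for browsers that report fractional positions.
function atDragBoundary(
  scrollTop: number,
  scrollHeight: number,
  clientHeight: number,
  direction: 'bottom' | 'top',
): boolean {
  return direction === 'bottom'
    ? scrollTop <= 0
    : scrollTop + clientHeight >= scrollHeight - 1
}

// scrollTop + clientHeight comes out to 599.3333 instead of 600.
// A strict >= scrollHeight comparison would say "not at bottom" forever.
console.log(atDragBoundary(299.3333, 600, 300, 'top')) // true
```

Without the `- 1`, a user at the visual bottom of the content could never start the drag gesture on those browsers.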
Rubber band
When you drag past the open or closed boundary, the drawer should resist instead of stop. I used logarithmic dampening:
```ts
export function rubberBand(value: number, min: number, max: number, factor = 0.55): number {
  if (value >= min && value <= max) return value
  const distance = value < min ? min - value : value - max
  const sign = value < min ? -1 : 1
  const dampened = factor * Math.log(1 + distance)
  return (value < min ? min : max) + sign * dampened
}
```

Math.log gives you a curve that is very responsive at small distances and increasingly resistant at large ones. Linear resistance feels like a wall. This feels elastic. Factor 0.55 came from testing. Anything lower felt too stiff, anything higher felt too loose.
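A few sample values show how flat the curve gets. The function is repeated here so the snippet runs standalone; the 0..400px drag range is a hypothetical example:

```ts
function rubberBand(value: number, min: number, max: number, factor = 0.55): number {
  if (value >= min && value <= max) return value
  const distance = value < min ? min - value : value - max
  const sign = value < min ? -1 : 1
  const dampened = factor * Math.log(1 + distance)
  return (value < min ? min : max) + sign * dampened
}

// Drag range is 0..400px. Overshooting by 10, 100, and 400px past the
// edge barely moves the drawer further, and the growth keeps slowing:
console.log(rubberBand(410, 0, 400).toFixed(1)) // ~401.3
console.log(rubberBand(500, 0, 400).toFixed(1)) // ~402.5
console.log(rubberBand(800, 0, 400).toFixed(1)) // ~403.3
```

Overshooting by 400px buys you about 2px more than overshooting by 10px, which is exactly the "pulling against something elastic" feel.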
Snap points and coordinate spaces
Snap points are positions the drawer can rest at. The format is flexible:

```tsx
<Drawer.Root snapPoints={[200, '50%', 'content']}>
```

'content' resolves to the drawer's measured height via ResizeObserver, clamped to the viewport. Percentages are fractions of viewport height. Numbers are absolute pixels.
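Resolving the mixed formats to pixels might look roughly like this. `resolveSnapPoint` and its signature are my sketch of the idea, not hiraki's actual code; only the resolution rules are taken from the description above:

```ts
type SnapPoint = number | `${number}%` | 'content'

// Resolve a snap point to absolute pixels. 'content' uses the measured
// drawer height (ResizeObserver in the real thing), clamped to the
// viewport; percentages are fractions of viewport height.
function resolveSnapPoint(
  point: SnapPoint,
  viewportHeight: number,
  contentHeight: number,
): number {
  if (point === 'content') return Math.min(contentHeight, viewportHeight)
  if (typeof point === 'string') {
    return (parseFloat(point) / 100) * viewportHeight
  }
  return point
}

const viewport = 800
const content = 1000
console.log([200, '50%', 'content'].map((p) =>
  resolveSnapPoint(p as SnapPoint, viewport, content),
)) // [200, 400, 800]
```

Note the 'content' point clamps: a 1000px-tall drawer on an 800px viewport resolves to 800.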
When you release, the engine projects where the drawer would land based on velocity:
```ts
export function decayPosition(position: number, velocity: number, deceleration = 0.003): number {
  if (Math.abs(velocity) < 0.01) return position
  return position + (velocity * Math.abs(velocity)) / (2 * deceleration)
}
```

The engine then finds the nearest snap point to that projected position. A fast release projects far and skips over intermediate points; a slow release stays close to where it already is.
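Putting projection and selection together: `findNearestSnapPoint` below is my reconstruction from the description, not the library's implementation, but it shows how a fast fling skips a snap point that a slow release would land on. The snap positions are made-up example values:

```ts
function decayPosition(position: number, velocity: number, deceleration = 0.003): number {
  if (Math.abs(velocity) < 0.01) return position
  return position + (velocity * Math.abs(velocity)) / (2 * deceleration)
}

// Pick the snap point closest to the projected landing position.
function findNearestSnapPoint(snapPoints: number[], position: number, velocity: number): number {
  const projected = decayPosition(position, velocity)
  return snapPoints.reduce((best, p) =>
    Math.abs(p - projected) < Math.abs(best - projected) ? p : best,
  )
}

const snaps = [100, 300, 600]

// Slow release at 320px barely projects anywhere: lands on 300.
console.log(findNearestSnapPoint(snaps, 320, 0.1)) // 300
// Fast fling (1.5 px/ms) from the same spot projects 375px further: skips to 600.
console.log(findNearestSnapPoint(snaps, 320, 1.5)) // 600
```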
[snap points demo: slow drag snaps to nearest, fast fling skips]
The bug that made everything "work", incorrectly
This one sat in production for a while and is worth telling properly.
Velocity from the pointer tracker is in px/ms. At pointer up, I multiplied by 1000 to get px/s for a threshold comparison:
```ts
const velocityPxMs = getVelocity() * 1000 // actually px/s, badly named
```

Then I also passed this value to findNearestSnapPoint, which internally called decayPosition with a deceleration constant calibrated for px/ms. With px/s values going in (say 300 instead of 0.3), the projected landing position was in the millions of pixels.
Here is the thing: it still returned a valid snap index. Because the projection was so astronomically far past every snap point, it just picked the nearest one by default. The function was technically doing the right thing through completely wrong math.
It worked. Visually close enough that I did not notice until a fast flick felt slightly off on snap selection. I was not even looking for a unit bug. I found it by adding a console.log to check why a specific flick was not landing where I expected.
The fix was trivial: keep velocity in px/ms throughout the engine, convert only when needed. But the diagnosis took hours because the symptom was so subtle.
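The magnitude of the error is easy to reproduce with decayPosition (repeated here so the snippet runs on its own). A 0.3 px/ms flick projects a sensible 15px of travel; the same flick mislabeled as px/s explodes the projection:

```ts
function decayPosition(position: number, velocity: number, deceleration = 0.003): number {
  if (Math.abs(velocity) < 0.01) return position
  return position + (velocity * Math.abs(velocity)) / (2 * deceleration)
}

// Correct units: 0.3 px/ms projects 0.09 / 0.006 = 15px of travel.
console.log(decayPosition(0, 0.3)) // ≈ 15
// Same flick in px/s, fed to the px/ms-calibrated constant:
console.log(decayPosition(0, 300)) // ≈ 15,000,000
```

From fifteen million pixels out, every snap point looks "nearest" in the same degenerate way, which is why the bug hid for so long.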
No re-renders during drag
The entire drag loop runs without touching React state. Translate is stored in a ref, applied directly to the DOM, and only committed to state when the gesture ends:
```ts
onDrag: (tv) => {
  applyTranslate(tv)
  const mt = maxTranslateRef.current
  const progress = mt > 0 ? Math.max(0, 1 - tv / mt) : 0
  applyProgress(progress)
}
```

applyProgress sets a CSS custom property on the overlay element, which the overlay reads for its opacity. No React, no diffing, just a number changing on an element.
At 120fps on a modern phone, you cannot afford a React render cycle per frame if you want the gesture to feel like the finger is actually attached to something.
From experiment to package
There was a specific moment where this stopped feeling like a side project. It was when I used it in an actual product and realized I was not fighting it. The API felt like the right shape. The gesture felt real.
The only thing left was making it releasable: proper ARIA semantics, focus trapping, scroll locking, keyboard dismissal, six layout variants, four directions. The kind of work that is not interesting to write about but is the difference between a demo and a library.
I shipped it as hiraki. Zero dependencies. The things I wanted when I started.
It is open source and still early. There are edge cases in the scroll conflict logic that I know about and have not fixed yet, and the animation timing system has a few inconsistencies I want to clean up. But the gesture layer is solid and I am using it in production.
Source on GitHub. Currently at 0.0.7.