Hey HN! I'm a tinkerer who built PageStash after repeatedly losing research sources to link rot.
What it does:
• Full-page capture (screenshot + text + DOM) - rough sketch below
• Client-side text extraction for search
• Knowledge graphs to map connections between your saved pages
• Works offline, Chrome/Firefox extensions
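To make the capture part concrete, here's an illustrative sketch of the kind of payload the content script builds (simplified, not the production code - the real thing spends most of its effort on edge cases):

    // Illustrative sketch only - not the production code.
    // The screenshot is taken separately via the extension APIs.
    interface Capture {
      url: string;
      title: string;
      html: string;       // serialized DOM snapshot
      text: string;       // plain text extracted client-side for search
      capturedAt: string;
    }

    function capturePage(): Capture {
      return {
        url: location.href,
        title: document.title,
        html: document.documentElement.outerHTML,
        text: document.body.innerText,
        capturedAt: new Date().toISOString(),
      };
    }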
Early stage but functional. Built with Next.js, Supabase, and way too much time debugging browser capture edge cases.
Why I built it:
I kept losing articles mid-research. Bookmarks break, Archive.org takes days, Pocket strips formatting. I needed something that captures the COMPLETE page instantly and lets me see how sources connect.
The knowledge graph surprised me - seeing 50+ research sources laid out visually revealed patterns I'd never have spotted in folders. Not sure if it's genuinely useful or just looks cool. Would love honest feedback.
Technical challenges I'm still figuring out:
• Lazy-loaded images (scroll-and-wait helps but isn't perfect - rough sketch below)
• Dynamic content timing (when to capture?)
• Large page handling (chunking uploads)
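For the lazy-loading one, the scroll-and-wait pass is conceptually something like this (hypothetical simplified version, not the exact code):

    // Hypothetical simplified version of the scroll-and-wait idea.
    async function scrollAndWait(stepPx = 800, settleMs = 250): Promise<void> {
      for (let y = 0; y < document.body.scrollHeight; y += stepPx) {
        window.scrollTo(0, y);
        // give IntersectionObserver-based lazy loaders a moment to fire
        await new Promise<void>((resolve) => setTimeout(resolve, settleMs));
      }
      window.scrollTo(0, 0); // back to the top before the screenshot
    }

Content that only loads on interaction (hover, click-to-expand) still slips through, which is part of why it isn't perfect.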
Free tier: 10 clips/month, no card needed
Pro: $12/mo for 1,000 clips
Try it: https://www.pagestash.app
Three things I'd genuinely love feedback on:
1. Is the knowledge graph actually useful or just visual noise?
2. Should I prioritize video/PDF capture or better search?
3. Is $12/mo reasonable, too high, or too low?
Happy to share technical details, discuss trade-offs, or hear about completely different approaches I should consider.
Longtime HN reader, first-time HN submitter.
Interesting project and approach. I would like to host the captures locally, though; otherwise I'd risk losing years of captures and other useful information that only works on PageStash if PS goes under.
Not a fan of subscriptions either, but I guess hosting and LLM analysis are recurring costs for PS as well.
I don't care much for the knowledge graph. Pricing-wise, I see it more like $5/mo, considering that a number of capture solutions already exist (Zotero, SingleFile, Save to Epub, etc.).
Totally agree on the storage. Just added an export option. I was thinking about local storage in terms of choosing an archive folder, but I'm not totally sure what would be intuitive - e.g. if you add markup/comments/notes in the platform, you'd probably want those included before saving (rather than at the point of capture).
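Just to make that question concrete, an exported item could look something like this (illustrative sketch only - names and fields aren't final):

    // Sketch only - field names are illustrative, not final.
    interface ExportedCapture {
      url: string;
      title: string;
      capturedAt: string;
      html: string;            // captured DOM snapshot
      screenshotPath: string;  // relative path inside the archive folder
      annotations: Array<{     // markup/comments/notes added in-platform later
        note: string;
        createdAt: string;
      }>;
    }

Exporting at capture time would leave the annotations out; exporting on demand would include them, which is the trade-off I'm weighing.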
Yeah, I need to strike a balance with the hard costs. Appreciate the feedback! It's difficult to get something up and running at those numbers (those guys have scale), but I'm hoping I can tailor towards niche use cases in the interim, because the established players don't release features quickly.