Libby adds AI recommendations and faces librarian pushback

Libby rolled out an AI feature called “Inspire Me” that turns prompts or saved titles into five tailored book suggestions drawn from each library’s digital collection. Some readers and librarians are pushing back, citing a preference for non-AI recommendations and privacy worries. OverDrive says it limits personal data sharing and prioritizes immediately available titles.

Published August 26, 2025 at 03:11 PM EDT in Artificial Intelligence (AI)

Libby launches Inspire Me

Libby, the popular library e-book and audiobook app from OverDrive, introduced an AI-driven recommendation feature called "Inspire Me." Users tap the option, choose fiction or nonfiction, then narrow results by age range, tone (for example "spine-tingling" or "amusing"), and even specific scenarios like "time travelers rescue dragons from medieval knights."

The feature returns five titles that match the prompt and, importantly, pulls only from the local library’s digital collection, prioritizing items that are immediately available to borrow. It’s a straightforward application of AI to surface titles already curated by libraries.
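The mechanics are easy to picture: filter a model’s suggestions against what the library already licenses, then lead with copies a patron can borrow right now. Below is a minimal sketch of that catalog-first constraint, using hypothetical names (CatalogItem, catalog_first_picks) rather than anything from OverDrive’s actual code.

    from dataclasses import dataclass

    @dataclass
    class CatalogItem:
        title_id: str
        title: str
        available_copies: int  # copies a patron could borrow right now

    def catalog_first_picks(candidate_ids: list[str],
                            local_catalog: dict[str, CatalogItem],
                            limit: int = 5) -> list[CatalogItem]:
        """Keep only titles the local library licenses, rank immediately
        available copies first, and cap the result at five suggestions."""
        owned = [local_catalog[t] for t in candidate_ids if t in local_catalog]
        # False sorts before True, so available-now titles lead the list;
        # the stable sort preserves the model's ordering within each group.
        owned.sort(key=lambda item: item.available_copies == 0)
        return owned[:limit]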

Reader and librarian response

Not everyone welcomed the addition. On social media and in library circles, some readers said they prefer human recommendations and discovery that doesn’t involve AI. Librarians worry the feature could reshape the relationship between curated collections and patrons, and many raised privacy concerns tied to AI-driven features.

OverDrive attempted to head off those concerns with a clear statement on data usage: the company says it avoids collecting "inessential personal information," does not share user details with third parties or AI models, and does not expose borrowing activity to the model. If a patron shares a saved tag to get suggestions, only the titles are used, not device or personal metadata.
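In practice, that claim comes down to what leaves the app. A sketch of a title-only request payload is below; the field names (seed_titles, prompt) are hypothetical and only illustrate the data-minimization idea, not OverDrive’s API.

    def build_inspire_request(prompt: str, saved_tag_titles: list[str]) -> dict:
        """Assemble a request containing only the reader's prompt and the
        titles from a shared tag; no account ID, device fingerprint, or
        borrowing history is included. (Hypothetical fields for illustration.)"""
        return {
            "prompt": prompt,
            "seed_titles": saved_tag_titles,  # titles only, no personal metadata
        }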

OverDrive’s chief marketing officer framed the tool as a complement to librarian-led discovery rather than a replacement. The feature was soft-launched earlier and is rolling out to all Libby users in September.

Why the pushback matters

The debate highlights two recurring themes in public-sector AI: trust in curated expertise and privacy around personalized services. Libraries are trusted institutions; any automation that appears to sideline staff judgment or obscure data use can erode that trust quickly.

Practical concerns for library leaders include maintaining transparency, ensuring opt-in controls, and measuring whether AI actually improves discovery versus adding noise.

Questions libraries should ask

  • What data is sent to the AI and can we limit it to the catalog only?
  • Is the feature opt-in and can patrons choose human-only recommendations?
  • How will success be measured: increased holds, reduced discovery time, or better circulation? A simple way to compare these outcomes is sketched below.
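On that last question, the measurement itself can stay simple. Here is one sketch of a pilot comparison, assuming hypothetical metric names and a control group of patrons who used human-curated discovery.

    from dataclasses import dataclass

    @dataclass
    class PilotMetrics:
        sessions: int
        holds_placed: int
        borrows: int

    def compare_pilot(inspire_me: PilotMetrics, control: PilotMetrics) -> dict:
        """Report per-session rates so an AI-assisted cohort can be compared
        against patrons who relied on human recommendations."""
        def rates(m: PilotMetrics) -> dict:
            sessions = max(m.sessions, 1)  # guard against an empty cohort
            return {
                "holds_per_session": m.holds_placed / sessions,
                "borrows_per_session": m.borrows / sessions,
            }
        return {"inspire_me": rates(inspire_me), "control": rates(control)}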

A path for responsible integration

A responsible rollout looks like clear "AI-suggested" labeling, an opt-in default, and short pilot programs that report measurable outcomes. Catalog-first models that only reference available titles reduce privacy exposure and keep the feature’s value aligned with the library’s existing investments.
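"Opt-in" and "labeled" are easy to make concrete in product terms. A brief sketch, with hypothetical setting names:

    from dataclasses import dataclass

    @dataclass
    class DiscoverySettings:
        ai_suggestions_enabled: bool = False  # off until the patron opts in
        label_ai_results: bool = True         # AI picks carry an "AI-suggested" badge

    def show_inspire_me(settings: DiscoverySettings) -> bool:
        """AI suggestions appear only for patrons who explicitly enabled them."""
        return settings.ai_suggestions_enabled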

For libraries and civic tech teams, the balance to strike is simple: use AI to enhance discovery without eroding the human-centered trust that makes libraries essential. That requires transparency, data minimization, and measurement, not hype.

As Libby’s Inspire Me feature rolls out, the response will be a test case for how public institutions balance innovation with stewardship. Stakeholders should watch whether the feature drives more borrowing of local holdings and whether privacy protections hold up under scrutiny.

QuarkyByte’s approach to cases like this is pragmatic: assess data flows, run small experiments with clear KPIs, and design UX that keeps librarians in the loop. The goal is measurable benefit for readers without sacrificing institutional trust.

QuarkyByte helps libraries and civic tech teams evaluate AI recommendation features with privacy-first risk assessments, UX audits, and pilot metrics. We design transparent, human-centered integration plans that preserve librarian oversight while measuring real engagement gains. Contact us to model outcomes for your collection.