Discover Unseen Literature

The smartest way to find, prioritize, read, and cite research — powered by AI and matched to your context.

Stanford University
University of Michigan
University of Chicago
New Jersey Institute of Technology
University of Illinois Chicago
University of Illinois Urbana-Champaign
University of California, Riverside

Demo

Chirpz in Action

Watch 1-min Demo

Why Chirpz

Why Use Chirpz

Traditional search drowns you in noise or misses critical papers. Chirpz understands your research to surface exactly what you need.

What are the latest advances in diffusion models?

I'll help you explore this scope. Let's start searching...

Stop Guessing Keywords

Talk naturally with the AI. Ask complex research questions in your own words — no more keyword guessing.

research_draft.pdf

12 pages • Uploaded

Gap Detected

Missing citations to support sections 3.1 and 3.3.

Spot Missing Citations

Upload your draft and let AI find the gaps in your literature review. Never face reviewer criticism for missing key references again.

280M+

Papers Across Databases

PubMed
arXiv
Journals & Conferences

One Search, All Sources

Search 280M+ papers across PubMed, arXiv, academic journals, and conference proceedings — all in one query.

1. Attention Is All You Need • 98
2. BERT: Pre-training of Deep... • 95
3. Language Models are Few... • 92

Cut through the Noise

AI reads, ranks, and filters papers by true relevance to your work, so you read the most important papers first.

AI Snapshot

Key Contribution: Introduces self-attention mechanism replacing RNNs

Relevance: Foundational work for your transformer research

Skip the Skimming

Get instant AI summaries of every paper. Decide what to read in seconds, not hours.

Attention Is All You Need

Vaswani, A., Shazeer, N., et al.

2017 • NeurIPS

Cited 115K+
Open Access

Cite With Confidence

Every result is real and verified — zero hallucinations. Get complete metadata, ready-to-export BibTeX citations, and more.
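For illustration, an exported citation for the paper shown above might look like the BibTeX entry below (a sketch based on the metadata on this page; the exact fields and formatting Chirpz exports may differ):

@inproceedings{vaswani2017attention,
  title     = {Attention Is All You Need},
  author    = {Vaswani, Ashish and Shazeer, Noam and others},
  booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
  year      = {2017}
}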

How it Works

Ask, Discover, Review, Cite.

Zero friction. Total focus.

STEP 01

Ask or Upload

Type your research question or upload a file for analysis. The AI understands your context — no complex search syntax needed.

research-draft.pdf
Analyze the attached draft and identify areas lacking sufficient supporting evidence or citations.

STEP 02

AI Scopes Research

The AI instantly extracts key research scopes and crafts smart search strategies — so your research starts targeted and fast.

Scope | Research Focus | Rationale
1 | Uncertainty quantification methods in LLMs | Core methodological foundation
2 | Calibration and overconfidence studies | Addresses reliability concerns
3 | Epistemic vs. aleatoric uncertainty | Theoretical framework distinction


STEP 03

Discover Papers

Relevant papers are discovered in real-time and ranked by relevance to your research scope.

STEP 04

Review & Cite

Browse, review, filter, and dive deep into discovered papers — all with rich, accurate metadata. Export citations with a single click.

Learning Conformal Abstention Policies for Adaptive Risk Management in Large Language and Vision-Language Models

Sina Tayebati, Divake Kumar, Nastaran Darabi, Dinithi Jayasuriya, Ranganath Krishnan, Amit Ranjan Trivedi

2025-02-08

arXiv (Cornell University)

Cited by 8

Type: preprint

Open Access

Large Language and Vision-Language Models (LLMs/VLMs) are increasingly used in safety-critical applications, yet their opaque decision-making complicates risk assessment and reliability. Uncertainty quantification (UQ) helps assess prediction confidence and enables abstention when uncertainty is high. Conformal prediction (CP), a leading UQ method, provides statistical guarantees but relies on static thresholds, which fail to adapt to task complexity and evolving data distributions, leading to suboptimal trade-offs in accuracy, coverage, and informativeness. To address this, we propose learnable conformal abstention, integrating reinforcement learning (RL) with CP to optimize abstention thresholds dynamically. By treating CP thresholds as adaptive actions, our approach balances multiple objectives, minimizing prediction set size while maintaining reliable coverage. Extensive evaluations across diverse LLM/VLM benchmarks show our method outperforms Least Ambiguous Classifiers (LAC) and Adaptive Prediction Sets (APS), improving accuracy by up to 3.2%, boosting AUROC for hallucination detection by 22.19%, enhancing uncertainty-guided selective generation (AUARC) by 21.17%, and reducing calibration error by 70%-85%. These improvements hold across multiple models and datasets while consistently meeting the 90% coverage target, establishing our approach as a more effective and flexible solution for reliable decision-making in safety-critical applications. The code is available at: https://github.com/sinatayebati/vlm-uncertainty.

Who Benefits

Who is Chirpz for

From individual researchers to entire institutions, Chirpz empowers everyone in the research ecosystem.

Chirpz Research Platform

Accelerate literature reviews and identify research gaps for multiple projects and grant proposals.

Explore emerging tech, track competitive research, and validate approaches with comprehensive analysis.

Streamline drug discovery, monitor clinical trials, and ensure comprehensive patent searches.

Focus on insights, discover papers faster and manage citations effortlessly.

Build strong foundations for your thesis with accurate citations and AI-guided discovery.

Make progress on your research, today

Start for free

No credit card required

Cancel anytime

Get started - it's free

FAQ

Frequently Asked Questions

Everything you need to know about Chirpz

What is Chirpz?

Chirpz is an intelligent literature discovery agent that helps you search, discover, and analyze academic papers across multiple databases using natural conversations. It understands your research context and data.

Can Chirpz find gaps in my draft?

Yes! Upload your PDF draft and Chirpz identifies gaps in your literature review, detects sections lacking citations, and suggests relevant papers to strengthen your claims.

Which databases does Chirpz search?

Chirpz searches 280M+ papers across PubMed, arXiv, journals, and conferences all at once. Ask your research question naturally — no need for complex search syntax or multiple database searches.

How does Chirpz rank papers?

Chirpz reads, ranks, and filters papers by true relevance to your research, not just citation counts. It understands the details of each paper and helps you cut through the noise.

What information do I get for each paper?

Complete metadata (title, abstract, authors, date, journal, citations), AI-generated summaries, ready-to-export BibTeX citations, and PDF previews when available.

Can I trust the results?

Absolutely. Every result is real and verified — zero hallucinations. All papers come from established databases with authentic, accurate metadata you can cite with confidence.

Is there a free plan?

Yes! Start with 200 free credits per month. The Pro plan unlocks unlimited requests and advanced features for power users and research teams.