Reader API
Convert a URL to LLM-friendly input by simply adding r.jina.ai in front.
Use s.jina.ai to search a query.
Use g.jina.ai for grounding (experimental).
Reader options (as exposed in the API playground; a hedged sketch mapping several of these to request headers follows the request examples below):
- Add API Key for Higher Rate Limit
- Use POST Method
- Content Format (Default)
- Timeout
- Target Selector
- Wait For Selector
- Excluded Selector
- Remove All Images
- Gather All Links At the End
- Gather All Images At the End
- JSON Response
- Forward Cookie
- Image Caption
- Use a Proxy Server
- Bypass the Cache
- GitHub Flavored Markdown (Enabled)
- Stream Mode
- Browser Locale
- Enable iframe Extraction
- Enable Shadow DOM Extraction
- Local PDF/HTML file (POST upload)
- Pre-Execute Custom JavaScript (POST upload)
Request (Bash)

```bash
curl https://r.jina.ai/https://example.com
```

Request (JavaScript)

```javascript
fetch('https://r.jina.ai/https://example.com', {
  method: 'GET',
})
```
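A sketch of a request that exercises several of the playground options above, assuming they map to request headers with the names shown; X-Return-Format, X-Timeout, X-Target-Selector, X-With-Generated-Alt, and X-No-Cache are assumptions inferred from the toggle names, not confirmed header names.

```bash
# Hypothetical header names for the playground toggles above:
#   X-Return-Format      -> Content Format
#   X-Timeout            -> Timeout (seconds)
#   X-Target-Selector    -> Target Selector (CSS selector to extract)
#   X-With-Generated-Alt -> Image Caption (VLM-generated alt text)
#   X-No-Cache           -> Bypass the Cache
curl "https://r.jina.ai/https://example.com" \
  -H "X-Return-Format: markdown" \
  -H "X-Timeout: 10" \
  -H "X-Target-Selector: article" \
  -H "X-With-Generated-Alt: true" \
  -H "X-No-Cache: true"
```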
Feeding web information into LLMs is an important step of grounding, yet it can be challenging. The simplest method is to scrape the webpage and feed the raw HTML. However, scraping can be complex and often blocked, and raw HTML is cluttered with extraneous elements such as markup and scripts. The Reader API addresses these issues by extracting the core content from a URL and converting it into clean, LLM-friendly text, ensuring high-quality input for your agent and RAG systems.
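A minimal sketch of the same request with an API key (for the higher rate limit) and a JSON response; the Authorization: Bearer scheme and the Accept: application/json header are assumptions, and JINA_API_KEY is a placeholder environment variable.

```bash
# Read a page with an API key and ask for a structured JSON response.
curl "https://r.jina.ai/https://example.com" \
  -H "Authorization: Bearer $JINA_API_KEY" \
  -H "Accept: application/json"
```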
Pose a Question
Reader allows you to feed your LLM with the latest information from the web. Simply prepend https://s.jina.ai/ to your query, and Reader will search the web and return the top five results with their URLs and contents, each in clean, LLM-friendly text. This way, you can always keep your LLM up-to-date, improve its factuality, and reduce hallucinations.
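A hedged sketch of a search request, assuming the query is simply URL-encoded and appended after https://s.jina.ai/; the query text is a placeholder, and an API key is included because the rate-limit table below lists s.jina.ai as unavailable without one.

```bash
# Search the web and return the top results as LLM-friendly text.
# Spaces in the query are percent-encoded; "latest AI news" is only a placeholder.
curl "https://s.jina.ai/latest%20AI%20news" \
  -H "Authorization: Bearer $JINA_API_KEY"
```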
Note: unlike the demo above, in practice you do not search the original question verbatim for grounding. More often, the original question is rewritten or split into multi-hop questions; the retrieved results are read, and additional queries are generated to gather more information as needed before arriving at a final answer.
The new grounding endpoint offers an end-to-end, near real-time fact-checking experience. It takes a given statement, grounds it using real-time web search results, and returns a factuality score and the exact references used. You can easily ground statements to reduce LLM hallucinations or improve the integrity of human-written content.
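A hedged sketch of a grounding request, assuming the statement is URL-encoded and appended after https://g.jina.ai/; the statement is a placeholder, and an API key is included because the rate-limit table below lists g.jina.ai as unavailable without one.

```bash
# Ground a statement against real-time web search results and
# receive a factuality score plus the references used.
curl "https://g.jina.ai/The%20Eiffel%20Tower%20is%20located%20in%20Paris" \
  -H "Authorization: Bearer $JINA_API_KEY"
```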
Images on the webpage are automatically captioned using a vision language model in the reader and formatted as image alt tags in the output. This gives your downstream LLM just enough hints to incorporate those images into its reasoning and summarizing processes. This means you can ask questions about the images, select specific ones, or even forward their URLs to a more powerful VLM for deeper analysis!
Yes, Reader natively supports PDF reading. It's compatible with most PDFs, including those with many images, and it's lightning fast! Combined with an LLM, you can easily build a ChatPDF or document analysis AI in no time.
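Since reading a PDF works the same way as reading an HTML page, a quick sketch (the arXiv URL is only an example document):

```bash
# Prefix a PDF URL exactly as you would an HTML page.
curl "https://r.jina.ai/https://arxiv.org/pdf/1706.03762"
```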
The best part? It's free!
The Reader API is available for free and offers flexible rate limits and pricing. Built on a scalable infrastructure, it offers high accessibility, concurrency, and reliability. We strive to be the preferred grounding solution for your LLMs.
Rate Limit

| Product | API Endpoint | Description | w/o API Key | w/ API Key | w/ Premium API Key | Average Latency | Token Usage Counting | Allowed Request |
|---|---|---|---|---|---|---|---|---|
| Embedding API | https://api.jina.ai/v1/embeddings | Convert text/images to fixed-length vectors | Not available | 500 RPM & 1,000,000 TPM | 2,000 RPM & 5,000,000 TPM | Depends on the input size | Count the number of tokens in the input request. | POST |
| Reranker API | https://api.jina.ai/v1/rerank | Rerank documents by relevance to a query | Not available | 500 RPM & 1,000,000 TPM | 2,000 RPM & 5,000,000 TPM | Depends on the input size | Count the number of tokens in the input request. | POST |
| Reader API | https://r.jina.ai | Convert a URL to LLM-friendly text | 20 RPM | 200 RPM | 1,000 RPM | 4.6s | Count the number of tokens in the output response. | GET/POST |
| Reader API | https://s.jina.ai | Search the web and convert results to LLM-friendly text | Not available | 40 RPM | 100 RPM | 8.7s | Count the number of tokens in the output response. | GET/POST |
| Reader API | https://g.jina.ai | Ground a statement with web knowledge | Not available | 10 RPM | 30 RPM | 22.7s | Count the total number of tokens in the whole process. | GET/POST |
| Classifier API (Zero-shot) | https://api.jina.ai/v1/classify | Classify inputs using zero-shot classification | Not available | 200 RPM & 500,000 TPM | 1,000 RPM & 3,000,000 TPM | Depends on the input size | Tokens counted as: input_tokens + label_tokens | POST |
| Classifier API (Few-shot) | https://api.jina.ai/v1/classify | Classify inputs using a trained few-shot classifier | Not available | 20 RPM & 200,000 TPM | 60 RPM & 1,000,000 TPM | Depends on the input size | Tokens counted as: input_tokens | POST |
| Classifier API | https://api.jina.ai/v1/train | Train a classifier using labeled examples | Not available | 20 RPM & 200,000 TPM | 60 RPM & 1,000,000 TPM | Depends on the input size | Tokens counted as: input_tokens × num_iters | POST |
| Segmenter API | https://segment.jina.ai | Tokenize and segment long text | 20 RPM | 200 RPM | 1,000 RPM | 0.3s | Tokens are not counted as usage. | GET/POST |
Don't panic! Every new API key contains one million free tokens!
API Pricing
API pricing is based on token usage: input tokens for standard APIs and output tokens for the Reader API. One API key gives you access to all search foundation products.
CC BY-NC License Self-Check
- Are you using our official API or official images on Azure or AWS?
- Are you using a paid API key or free trial key?
- Are you using our official model images on AWS and Azure?
Reader-related common questions
- What are the costs associated with using the Reader API?
- How does the Reader API function?
- Is the Reader API open source?
- What is the typical latency for the Reader API?
- Why should I use the Reader API instead of scraping the page myself?
- Does the Reader API support multiple languages?
- What should I do if a website blocks the Reader API?
- Can the Reader API extract content from PDF files?
- Can the Reader API process media content from web pages?
- Is it possible to use the Reader API on local HTML files?
- Does the Reader API cache the content?
- Can I use the Reader API to access content behind a login?
- Can I use the Reader API to access PDFs on arXiv?
- How does image captioning work in Reader?
- How scalable is Reader? Can I use it in production?
- What is the rate limit of the Reader API?
API-related common questions
- Can I use the same API key for the embedding, reranking, reader, and fine-tuning APIs?
- Can I monitor the token usage of my API key?
- What should I do if I forget my API key?
- Do API keys expire?
- Why is the first request for some models slow?
- Is user input data used for training your models?
Billing-related common questions
- Is billing based on the number of sentences or requests?
- Is there a free trial available for new users?
- Are tokens charged for failed requests?
- What payment methods are accepted?
- Is invoicing available for token purchases?