Firecrawl
Empower your AI apps with clean data from any website. Featuring advanced scraping, crawling, and data extraction capabilities.
This repository is under development, and we are still integrating custom modules into the mono repo. It is not fully ready for self-hosted deployment yet, but you can already run it locally.
What is Firecrawl?
Firecrawl is an API service that takes a URL, crawls it, and converts it into clean markdown or structured data. We crawl all accessible subpages and give you clean data for each. No sitemap required. Check out our documentation.
Psst. Hey, you, join our stargazers :)
How to use it?
We provide an easy-to-use API with our hosted version. You can find the playground and documentation here. You can also self-host the backend if you prefer.
Check out the following resources to get started:
- API: Documentation
- SDKs: Python, Node, Go, Rust
- LLM Frameworks: Langchain (Python), Langchain (JS), Llama Index, Crew.ai, Composio, PraisonAI, and more
- Low-code Frameworks: Dify, Langflow, Flowise AI, Cargo, Pipedream
- Others: Zapier, Pabbly Connect
- Want an SDK or integration? Let us know by opening an issue.
To run it locally, refer to the guide here.
API Key
To use the API, you need to sign up on Firecrawl and get an API key.
Features
- Scrape: scrape a URL and get its content in LLM-ready format (markdown, structured data via LLM extract, screenshot, HTML)
- Crawl: scrape all the URLs of a web page and return the content in LLM-ready format
- Map: input a website and get all of its URLs - extremely fast
- Search: search the web and get full content from the results
- Extract: get structured data from single pages, multiple pages, or entire websites with AI.
Powerful Capabilities
- LLM-ready formats: markdown, structured data, screenshot, HTML, links, metadata
- The hard stuff: proxies, anti-bot mechanisms, dynamic content (JS rendering), output parsing, orchestration
- Customizability: exclude tags, crawl behind auth walls with custom headers, max crawl depth, etc...
- Media parsing: PDFs, DOCX, images
- Reliability first: designed to get the data you need - no matter how hard it is
- Actions: click, scroll, input, wait and more before extracting data
- Batching (New): scrape thousands of URLs at the same time with the new async endpoint.
You can find all of Firecrawl's capabilities and how to use them in our documentation.
Crawling
Used to crawl a URL and all accessible subpages. This submits a crawl job and returns a job ID to check the status of the crawl.

```bash
curl -X POST https://api.firecrawl.dev/v1/crawl \
    -H 'Content-Type: application/json' \
    -H 'Authorization: Bearer fc-YOUR_API_KEY' \
    -d '{
      "url": "https://docs.firecrawl.dev",
      "limit": 10,
      "scrapeOptions": {
        "formats": ["markdown", "html"]
      }
    }'
```

Returns a crawl job id and the url to check the status of the crawl.

```json
{
  "success": true,
  "id": "123-456-789",
  "url": "https://api.firecrawl.dev/v1/crawl/123-456-789"
}
```

Check Crawl Job
Used to check the status of a crawl job and get its result.
```bash
curl -X GET https://api.firecrawl.dev/v1/crawl/123-456-789 \
    -H 'Content-Type: application/json' \
    -H 'Authorization: Bearer YOUR_API_KEY'
```

```json
{
  "status": "completed",
  "total": 36,
  "creditsUsed": 36,
  "expiresAt": "2024-00-00T00:00:00.000Z",
  "data": [
    {
      "markdown": "[Firecrawl Docs home page!...",
      "html": "<!DOCTYPE html><html lang=\"en\" class=\"js-focus-visible lg:[--scroll-mt:9.5rem]\" data-js-focus-visible=\"\">...",
      "metadata": {
        "title": "Build a 'Chat with website' using Groq Llama 3 | Firecrawl",
        "language": "en",
        "sourceURL": "https://docs.firecrawl.dev/learn/rag-llama3",
        "description": "Learn how to use Firecrawl, Groq Llama 3, and Langchain to build a 'Chat with your website' bot.",
        "ogLocaleAlternate": [],
        "statusCode": 200
      }
    }
  ]
}
```
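The submit-then-poll flow above can be sketched in a few lines of Python. This is an illustrative helper, not the official SDK: `fetch_status` stands in for an HTTP GET against the job's status URL, and the status values mirror the responses shown above.

```python
import time

def poll_crawl(fetch_status, interval=2, max_attempts=10):
    """Poll a crawl job until it completes or fails.

    fetch_status: a zero-argument callable returning the parsed JSON
    status payload (a stand-in for an HTTP GET on the job's status URL).
    """
    for _ in range(max_attempts):
        payload = fetch_status()
        if payload["status"] == "completed":
            return payload["data"]
        if payload["status"] == "failed":
            raise RuntimeError("crawl job failed")
        time.sleep(interval)
    raise TimeoutError("crawl job did not finish in time")

# Simulated status responses: one in-progress check, then completion.
responses = iter([
    {"status": "scraping"},
    {"status": "completed",
     "data": [{"markdown": "# Docs", "metadata": {"statusCode": 200}}]},
])
pages = poll_crawl(lambda: next(responses), interval=0)
print(len(pages))  # 1
```

In a real client, `fetch_status` would GET the `url` returned by the crawl submission with your `Authorization` header.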
Scraping
Used to scrape a URL and get its content in the specified formats.

```bash
curl -X POST https://api.firecrawl.dev/v1/scrape \
    -H 'Content-Type: application/json' \
    -H 'Authorization: Bearer YOUR_API_KEY' \
    -d '{
      "url": "https://docs.firecrawl.dev",
      "formats": ["markdown", "html"]
    }'
```

Response:
```json
{
  "success": true,
  "data": {
    "markdown": "Launch Week I is here! [See our Day 2 Release](https://www.firecrawl.dev/blog/launch-week-i-day-2-doubled-rate-limits)[? Get 2 months free...",
    "html": "<!DOCTYPE html><html lang=\"en\" class=\"light\" style=\"color-scheme: light;\"><body class=\"__variable_36bd41 __variable_d7dc5d font-inter ...",
    "metadata": {
      "title": "Home - Firecrawl",
      "description": "Firecrawl crawls and converts any website into clean markdown.",
      "language": "en",
      "keywords": "Firecrawl,Markdown,Data,Mendable,Langchain",
      "robots": "follow, index",
      "ogTitle": "Firecrawl",
      "ogDescription": "Turn any website into LLM-ready data.",
      "ogUrl": "https://www.firecrawl.dev/",
      "ogImage": "https://www.firecrawl.dev/og.png?123",
      "ogLocaleAlternate": [],
      "ogSiteName": "Firecrawl",
      "sourceURL": "https://firecrawl.dev",
      "statusCode": 200
    }
  }
}
```
Map
Used to map a URL and get the URLs of the website. This returns most links present on the website.

```bash
curl -X POST https://api.firecrawl.dev/v1/map \
    -H 'Content-Type: application/json' \
    -H 'Authorization: Bearer YOUR_API_KEY' \
    -d '{
      "url": "https://firecrawl.dev"
    }'
```

Response:

```json
{
  "status": "success",
  "links": [
    "https://firecrawl.dev",
    "https://www.firecrawl.dev/pricing",
    "https://www.firecrawl.dev/blog",
    "https://www.firecrawl.dev/playground",
    "https://www.firecrawl.dev/smart-crawl"
  ]
}
```

Map with search
Map with the search parameter allows you to search for specific URLs inside a website.
```bash
curl -X POST https://api.firecrawl.dev/v1/map \
    -H 'Content-Type: application/json' \
    -H 'Authorization: Bearer YOUR_API_KEY' \
    -d '{
      "url": "https://firecrawl.dev",
      "search": "docs"
    }'
```

The response will be an ordered list from the most relevant to the least relevant.

```json
{
  "status": "success",
  "links": [
    "https://docs.firecrawl.dev",
    "https://docs.firecrawl.dev/sdks/python",
    "https://docs.firecrawl.dev/learn/rag-llama3"
  ]
}
```

Search
Search the web and get full content from the results.
Firecrawl's search API allows you to perform web searches and optionally scrape the search results in one operation.
- Choose specific output formats (markdown, HTML, links, screenshot)
- Search the web with customizable parameters (language, country, etc.)
- Optionally retrieve content from the search results in various formats
- Control the number of results and set timeouts

```bash
curl -X POST https://api.firecrawl.dev/v1/search \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer fc-YOUR_API_KEY" \
    -d '{
      "query": "what is firecrawl?",
      "limit": 5
    }'
```

Response:
```json
{
  "success": true,
  "data": [
    {
      "url": "https://firecrawl.dev",
      "title": "Firecrawl | Home Page",
      "description": "Turn websites into LLM-ready data with Firecrawl"
    },
    {
      "url": "https://docs.firecrawl.dev",
      "title": "Documentation | Firecrawl",
      "description": "Learn how to use Firecrawl in your own applications"
    }
  ]
}
```

With content scraping
```bash
curl -X POST https://api.firecrawl.dev/v1/search \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer fc-YOUR_API_KEY" \
    -d '{
      "query": "what is firecrawl?",
      "limit": 5,
      "scrapeOptions": {
        "formats": ["markdown", "links"]
      }
    }'
```
Extract (Beta)
Get structured data from entire websites with a prompt and/or a schema.
You can extract structured data from one or multiple URLs, including wildcards:
Single page, example: https://firecrawl.dev/some-page
Multiple pages / full domain, example: https://firecrawl.dev/*
When you use /*, Firecrawl will automatically crawl and parse all URLs it can discover in that domain, then extract the requested data.
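To illustrate the /* semantics, the following sketch shows which URLs fall under a trailing-/* pattern. This is a simplified prefix check written for this README, not Firecrawl's actual matcher; in particular, it treats subdomains as outside the pattern.

```python
def matches_wildcard(url: str, pattern: str) -> bool:
    """Illustrative prefix check for a trailing-/* pattern:
    the bare root and anything under it match; other hosts do not."""
    if not pattern.endswith("/*"):
        return url == pattern
    root = pattern[:-2]               # e.g. "https://firecrawl.dev"
    return url == root or url.startswith(root + "/")

print(matches_wildcard("https://firecrawl.dev/blog", "https://firecrawl.dev/*"))  # True
print(matches_wildcard("https://docs.firecrawl.dev", "https://firecrawl.dev/*"))  # False
```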
```bash
curl -X POST https://api.firecrawl.dev/v1/extract \
    -H 'Content-Type: application/json' \
    -H 'Authorization: Bearer YOUR_API_KEY' \
    -d '{
      "urls": [
        "https://firecrawl.dev/*",
        "https://docs.firecrawl.dev/",
        "https://www.ycombinator.com/companies"
      ],
      "prompt": "Extract the company mission, whether it is open source, and whether it is in Y Combinator from the page.",
      "schema": {
        "type": "object",
        "properties": {
          "company_mission": {
            "type": "string"
          },
          "is_open_source": {
            "type": "boolean"
          },
          "is_in_yc": {
            "type": "boolean"
          }
        },
        "required": [
          "company_mission",
          "is_open_source",
          "is_in_yc"
        ]
      }
    }'
```

Response:

```json
{
  "success": true,
  "id": "44aa536d-f1cb-4706-ab87-ed0386685740",
  "urlTrace": []
}
```

If you are using the SDK, it will automatically pull the response for you:
```json
{
  "success": true,
  "data": {
    "company_mission": "Firecrawl is the easiest way to extract data from the web. Developers use us to reliably convert URLs into LLM-ready markdown or structured data with a single API call.",
    "supports_sso": false,
    "is_open_source": true,
    "is_in_yc": true
  }
}
```
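The extracted object is expected to match the schema you declared in the request. A minimal client-side sanity check can be sketched in plain Python (this is an illustration covering only a tiny subset of JSON Schema; a real application might use the `jsonschema` package instead):

```python
# Maps the JSON Schema type names used above to Python types.
TYPE_MAP = {"string": str, "boolean": bool, "number": (int, float)}

def check_against_schema(data: dict, schema: dict) -> list:
    """Return a list of problems found when `data` is compared with a
    (very small subset of a) JSON Schema 'object' definition."""
    problems = []
    for key in schema.get("required", []):
        if key not in data:
            problems.append(f"missing required field: {key}")
    for key, spec in schema.get("properties", {}).items():
        if key in data and not isinstance(data[key], TYPE_MAP[spec["type"]]):
            problems.append(f"wrong type for {key}")
    return problems

schema = {
    "type": "object",
    "properties": {
        "company_mission": {"type": "string"},
        "is_open_source": {"type": "boolean"},
        "is_in_yc": {"type": "boolean"},
    },
    "required": ["company_mission", "is_open_source", "is_in_yc"],
}
result = {
    "company_mission": "Extract data from the web.",
    "is_open_source": True,
    "is_in_yc": True,
}
print(check_against_schema(result, schema))  # []
```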
LLM Extraction (Beta)
Used to extract structured data from scraped pages.

```bash
curl -X POST https://api.firecrawl.dev/v1/scrape \
    -H 'Content-Type: application/json' \
    -H 'Authorization: Bearer YOUR_API_KEY' \
    -d '{
      "url": "https://www.mendable.ai/",
      "formats": ["json"],
      "jsonOptions": {
        "schema": {
          "type": "object",
          "properties": {
            "company_mission": {
              "type": "string"
            },
            "supports_sso": {
              "type": "boolean"
            },
            "is_open_source": {
              "type": "boolean"
            },
            "is_in_yc": {
              "type": "boolean"
            }
          },
          "required": [
            "company_mission",
            "supports_sso",
            "is_open_source",
            "is_in_yc"
          ]
        }
      }
    }'
```

Response:

```json
{
  "success": true,
  "data": {
    "content": "Raw Content",
    "metadata": {
      "title": "Mendable",
      "description": "Mendable allows you to easily build AI chat applications. Ingest, customize, then deploy with one line of code anywhere you want. Brought to you by SideGuide",
      "robots": "follow, index",
      "ogTitle": "Mendable",
      "ogDescription": "Mendable allows you to easily build AI chat applications. Ingest, customize, then deploy with one line of code anywhere you want. Brought to you by SideGuide",
      "ogUrl": "https://mendable.ai/",
      "ogImage": "https://mendable.ai/mendable_new_og1.png",
      "ogLocaleAlternate": [],
      "ogSiteName": "Mendable",
      "sourceURL": "https://mendable.ai/"
    },
    "json": {
      "company_mission": "Train a secure AI on your technical resources that answers customer and employee questions so your team doesn't have to",
      "supports_sso": true,
      "is_open_source": false,
      "is_in_yc": true
    }
  }
}
```

Extracting without a schema (New)
You can now extract without a schema by just passing a prompt to the endpoint. The LLM chooses the structure of the data.
```bash
curl -X POST https://api.firecrawl.dev/v1/scrape \
    -H 'Content-Type: application/json' \
    -H 'Authorization: Bearer YOUR_API_KEY' \
    -d '{
      "url": "https://docs.firecrawl.dev/",
      "formats": ["json"],
      "jsonOptions": {
        "prompt": "Extract the company mission from the page."
      }
    }'
```

Interacting with the page with Actions (Cloud-only)
Firecrawl allows you to perform various actions on a web page before scraping its content. This is particularly useful for interacting with dynamic content, navigating through pages, or accessing content that requires user interaction.
Here is an example of how to use actions to navigate to google.com, search for Firecrawl, click on the first result, and take a screenshot.
```bash
curl -X POST https://api.firecrawl.dev/v1/scrape \
    -H 'Content-Type: application/json' \
    -H 'Authorization: Bearer YOUR_API_KEY' \
    -d '{
      "url": "google.com",
      "formats": ["markdown"],
      "actions": [
        {"type": "wait", "milliseconds": 2000},
        {"type": "click", "selector": "textarea[title=\"Search\"]"},
        {"type": "wait", "milliseconds": 2000},
        {"type": "write", "text": "firecrawl"},
        {"type": "wait", "milliseconds": 2000},
        {"type": "press", "key": "ENTER"},
        {"type": "wait", "milliseconds": 3000},
        {"type": "click", "selector": "h3"},
        {"type": "wait", "milliseconds": 3000},
        {"type": "screenshot"}
      ]
    }'
```
Batch scraping multiple URLs (New)
You can now batch scrape multiple URLs at the same time. It works very similarly to the /crawl endpoint: it submits a batch scrape job and returns a job ID to check the status of the batch scrape.
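Since each batch job accepts a list of URLs, a large URL list can be split into fixed-size request bodies before submission. A small illustrative helper (the chunk size of 100 is an arbitrary choice for this sketch, not an API limit):

```python
def build_batch_payloads(urls, formats, chunk_size=100):
    """Split `urls` into /v1/batch/scrape request bodies of at most
    `chunk_size` URLs each, all requesting the same output formats."""
    return [
        {"urls": urls[i:i + chunk_size], "formats": list(formats)}
        for i in range(0, len(urls), chunk_size)
    ]

urls = [f"https://docs.firecrawl.dev/page-{n}" for n in range(250)]
payloads = build_batch_payloads(urls, ["markdown", "html"])
print(len(payloads))              # 3
print(len(payloads[-1]["urls"]))  # 50
```

Each payload can then be POSTed to the endpoint as in the curl example below, and each returned job ID polled separately.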
```bash
curl -X POST https://api.firecrawl.dev/v1/batch/scrape \
    -H 'Content-Type: application/json' \
    -H 'Authorization: Bearer YOUR_API_KEY' \
    -d '{
      "urls": ["https://docs.firecrawl.dev", "https://docs.firecrawl.dev/sdks/overview"],
      "formats": ["markdown", "html"]
    }'
```

Using the Python SDK
Installing the Python SDK

```bash
pip install firecrawl-py
```

Crawl a website

```python
from firecrawl.firecrawl import FirecrawlApp
from firecrawl.firecrawl import ScrapeOptions

app = FirecrawlApp(api_key="fc-YOUR_API_KEY")

# Scrape a website:
scrape_status = app.scrape_url(
    'https://firecrawl.dev',
    formats=["markdown", "html"]
)
print(scrape_status)

# Crawl a website:
crawl_status = app.crawl_url(
    'https://firecrawl.dev',
    limit=100,
    scrape_options=ScrapeOptions(formats=["markdown", "html"]),
    poll_interval=30
)
print(crawl_status)
```

Extracting structured data from a URL
With LLM extraction, you can easily extract structured data from any URL. We support Pydantic schemas to make it easier for you too. Here is how to use it:

```python
from typing import List

from firecrawl.firecrawl import JsonConfig
from pydantic import BaseModel, Field

class ArticleSchema(BaseModel):
    title: str
    points: int
    by: str
    commentsURL: str

class TopArticlesSchema(BaseModel):
    top: List[ArticleSchema] = Field(..., description="Top 5 stories")

json_config = JsonConfig(schema=TopArticlesSchema.model_json_schema())

llm_extraction_result = app.scrape_url(
    'https://news.ycombinator.com',
    formats=["json"],
    json=json_config
)
print(llm_extraction_result.json)
```

Using the Node SDK
Installation
To install the Firecrawl Node SDK, you can use npm:

```bash
npm install @mendable/firecrawl-js
```

Usage
- Get an API key from firecrawl.dev
- Set the API key as an environment variable named FIRECRAWL_API_KEY or pass it as a parameter to the FirecrawlApp class.
```ts
import FirecrawlApp, { CrawlParams, CrawlStatusResponse } from '@mendable/firecrawl-js';

const app = new FirecrawlApp({ apiKey: "fc-YOUR_API_KEY" });

// Scrape a website
const scrapeResponse = await app.scrapeUrl('https://firecrawl.dev', {
  formats: ['markdown', 'html'],
});

if (scrapeResponse) {
  console.log(scrapeResponse);
}

// Crawl a website
const crawlResponse = await app.crawlUrl('https://firecrawl.dev', {
  limit: 100,
  scrapeOptions: {
    formats: ['markdown', 'html'],
  }
} satisfies CrawlParams, true, 30) satisfies CrawlStatusResponse;

if (crawlResponse) {
  console.log(crawlResponse);
}
```
Extracting structured data from a URL
With LLM extraction, you can easily extract structured data from any URL. We support Zod schemas to make it easier for you too. Here is how to use it:

```ts
import FirecrawlApp from "@mendable/firecrawl-js";
import { z } from "zod";

const app = new FirecrawlApp({
  apiKey: "fc-YOUR_API_KEY"
});

// Define schema to extract contents into
const schema = z.object({
  top: z
    .array(
      z.object({
        title: z.string(),
        points: z.number(),
        by: z.string(),
        commentsURL: z.string(),
      })
    )
    .length(5)
    .describe("Top 5 stories on Hacker News"),
});

const scrapeResult = await app.scrapeUrl("https://news.ycombinator.com", {
  jsonOptions: { extractionSchema: schema },
});

console.log(scrapeResult.data["json"]);
```
Open source vs cloud offering
Firecrawl is open source, available under the AGPL-3.0 license.
To deliver the best possible product, we offer a hosted version of Firecrawl alongside our open-source offering. The cloud solution allows us to continuously innovate and maintain a high-quality, sustainable service for all users.
Firecrawl Cloud is available at firecrawl.dev and offers a range of features that are not available in the open source version.
贡献
我们喜欢贡献!在提交拉动请求之前,请阅读我们的贡献指南。如果您想自我主持,请参阅《自托指南》。
最终用户的唯一责任是在用firecrawl刮擦,搜索和爬行时尊重网站的政策。建议用户在启动任何刮擦活动之前遵守网站的适用隐私政策和使用条款。默认情况下, firecrawl尊重网站robots.txt文件中指定的指令。通过利用firecrawl ,您明确同意遵守这些条件。
Contributors
License Disclaimer
This project is primarily licensed under the GNU Affero General Public License v3.0 (AGPL-3.0), as specified in the LICENSE file in the root directory of this repository. However, certain components of this project are licensed under the MIT License. Refer to the LICENSE files in these specific directories for details.
Please note:
- The AGPL-3.0 license applies to all parts of the project unless otherwise specified.
- The SDKs and some UI components are licensed under the MIT License. Refer to the LICENSE files in these specific directories for details.
- When using or contributing to this project, ensure you comply with the appropriate license terms for the specific component you are working with.
For more details on the licensing of specific components, please refer to the LICENSE files in the respective directories or contact the project maintainers.