importData

The importData route in the Flexibel API imports data from a variety of sources into the AI chatbot system, providing the chatbot with the information it needs to deliver accurate and relevant responses.

POST https://api.flexibel.ai/v0/importData

Request

To import data using curl:

curl https://api.flexibel.ai/v0/importData \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "type": "DATA_SOURCE_TYPE",
    "url": "DATA_SOURCE_URL",
    "file": "FILE_PATH",
    "includePaths": ["PATHS_TO_INCLUDE"],
    "excludePaths": ["PATHS_TO_EXCLUDE"]
  }'

Response

A successful response confirms the import and echoes the type of the data source:

{
  "message": "Data successfully imported.",
  "type": "DATA_SOURCE_TYPE"
}

Request Body

The table below outlines the parameters for the importData route:

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| type | string | yes | The type of data source (e.g., url, pdf). |
| url | string | no | The URL of the source. Must be included if the selected type requires a URL (e.g., url, webCrawler). |
| file | string | no | The file containing the data. Must be included if the selected type requires a file (e.g., pdf). |
| includePaths | array | no | Paths to include when the type is webCrawler. |
| excludePaths | array | no | Paths to exclude when the type is webCrawler. |

Types

URL

The URL data import type allows you to import content directly from a specific webpage. This is ideal when you want to provide the chatbot with information from a particular online source or a specific page on your website.

To import data from a single URL, specify the type as url and provide the URL of the page.

{
  "type": "url",
  "url": "https://www.example.com/article"
}
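
As a complete request, this is the same curl pattern shown in the Request section, using the same $API_KEY variable:

curl https://api.flexibel.ai/v0/importData \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "type": "url",
    "url": "https://www.example.com/article"
  }'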

Web Crawler

The web crawler type is used for comprehensive data ingestion from a website. It crawls through the site, following links and ingesting content from multiple pages. You can customize the crawl by specifying which paths to include or exclude, allowing for targeted data collection.

For a web crawler import, include the base URL and specify the paths to include and/or exclude in the crawling process.

{
  "type": "webCrawler",
  "url": "https://www.example.com",
  "includePaths": ["/products/*", "/about-us"],
  "excludePaths": ["/privacy-policy", "/terms-of-use"]
}
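
If you are calling the API from JavaScript rather than curl, the same crawl request might look like the sketch below. This is a minimal sketch assuming Node 18+ (for the built-in fetch) and an API_KEY environment variable; adapt it to whichever HTTP client you use.

// Minimal sketch: start a webCrawler import from Node 18+.
async function importSite() {
  const response = await fetch("https://api.flexibel.ai/v0/importData", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.API_KEY}`, // assumes API_KEY is set
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      type: "webCrawler",
      url: "https://www.example.com",
      includePaths: ["/products/*", "/about-us"],
      excludePaths: ["/privacy-policy", "/terms-of-use"],
    }),
  });
  // e.g. { message: "Data successfully imported.", type: "webCrawler" }
  return response.json();
}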

PDF

The PDF import type allows for the ingestion of content from PDF documents. This method is essential for incorporating detailed information from reports, manuals, or any PDF-based materials into the chatbot's knowledge base. Because PDFs are binary files, they must be read as a stream before being sent to the API.

In a Node.js environment, use the fs module to create a readable stream from your PDF file. This stream is then used in the API request to import the PDF content.

const fs = require("fs");

// Create a readable stream from the PDF on disk.
const filePath = "/path/to/your/document.pdf";
const fileStream = fs.createReadStream(filePath);

Then, pass the stream itself in your request body. Note that file holds the stream object, not the string "fileStream":

{
  type: "pdf",
  file: fileStream
}
Note

Ensure the server making the API request can access the file path, and attach the file stream to the request body in the manner your HTTP client or request library expects.
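
As one illustrative way to attach the stream, the sketch below uses the third-party axios and form-data packages to send the file as multipart/form-data. Whether the endpoint expects multipart encoding is an assumption here, not something this reference confirms, so verify the expected format against your client library and the API's requirements.

const fs = require("fs");
const axios = require("axios");
const FormData = require("form-data"); // npm install axios form-data

async function importPdf(filePath) {
  // Stream the PDF from disk rather than loading it fully into memory.
  const form = new FormData();
  form.append("type", "pdf");
  form.append("file", fs.createReadStream(filePath));

  const response = await axios.post("https://api.flexibel.ai/v0/importData", form, {
    headers: {
      ...form.getHeaders(), // sets the multipart Content-Type and boundary
      Authorization: `Bearer ${process.env.API_KEY}`, // assumes API_KEY is set
    },
  });
  // e.g. { message: "Data successfully imported.", type: "pdf" }
  return response.data;
}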