Welcome to Tailwinds API Streaming
Learn when you can stream back to your front end
Streaming in Tailwinds allows for real-time token delivery as they become available, enhancing the responsiveness and user experience of your AI applications. This guide will walk you through configuring and using API streaming with Tailwinds.
How Streaming Works
When streaming is enabled for a prediction request, Tailwinds sends tokens as data-only server-sent events as soon as they are generated. This approach provides a more dynamic and interactive experience for users.
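Each server-sent event arrives as plain text lines of the form `field: value`, with blank lines separating events. As a rough sketch of that framing (the helper below is illustrative only, not part of the Tailwinds client API), a raw line can be split like this:

```python
def parse_sse_line(line):
    """Split one raw SSE line, e.g. 'event: token', into (field, value).

    Returns None for blank lines, which delimit events. Illustrative
    helper only; not part of the Tailwinds API.
    """
    if not line:
        return None
    field, _, value = line.partition(": ")
    return (field, value)
```

The examples below apply the same idea directly with `startswith` checks instead of a separate parser.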
Configuring Streaming
Here's how you can implement streaming using Python's requests library:
```python
import json

import requests


def stream_prediction(chatflow_id, question):
    url = f"https://your-tailwinds-instance.com/api/v1/predictions/{chatflow_id}"
    payload = {
        "question": question,
        "streaming": True,
    }
    headers = {"Content-Type": "application/json"}

    with requests.post(url, json=payload, headers=headers, stream=True) as response:
        for line in response.iter_lines():
            if not line:
                continue
            decoded_line = line.decode("utf-8")
            if decoded_line.startswith("data: "):
                try:
                    data = json.loads(decoded_line[6:])
                except json.JSONDecodeError:
                    continue  # skip data payloads that are not JSON
                if isinstance(data, dict) and "token" in data:
                    print(data["token"], end="", flush=True)
            elif decoded_line.startswith("event: error"):
                print(f"\nError: {decoded_line}")
                break
            elif decoded_line == "event: end":
                print("\nStream ended")
                break


# Usage
stream_prediction("your-chatflow-id", "Hello world!")
```
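If you need the complete answer rather than incremental printing, the same parsing logic can accumulate tokens into one string. This variant takes any iterable of already-decoded lines, so it works with `response.iter_lines()` or with a canned transcript in tests; it is a sketch, not an official helper:

```python
import json


def collect_tokens(lines):
    """Accumulate token payloads from decoded SSE lines into one string.

    `lines` is any iterable of already-decoded strings. Sketch only;
    not part of the Tailwinds API.
    """
    parts = []
    for line in lines:
        if line.startswith("data: "):
            try:
                data = json.loads(line[6:])
            except json.JSONDecodeError:
                continue  # non-JSON data payloads are skipped
            if isinstance(data, dict) and "token" in data:
                parts.append(data["token"])
        elif line == "event: end":
            break
    return "".join(parts)
```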
To enable streaming with cURL, set the streaming parameter to true in your JSON payload. Here's an example:
```bash
curl https://your-tailwinds-instance.com/api/v1/predictions/{chatflow-id} \
  -H "Content-Type: application/json" \
  -d '{
    "question": "Hello world!",
    "streaming": true
  }'
```
Here's how you can implement streaming using JavaScript with the Fetch API:
```javascript
async function streamPrediction(chatflowId, question) {
  const url = `https://your-tailwinds-instance.com/api/v1/predictions/${chatflowId}`;
  const payload = {
    question: question,
    streaming: true
  };

  try {
    const response = await fetch(url, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json'
      },
      body: JSON.stringify(payload)
    });

    const reader = response.body.getReader();
    const decoder = new TextDecoder();

    while (true) {
      const { done, value } = await reader.read();
      if (done) break;

      const chunk = decoder.decode(value);
      const lines = chunk.split('\n');

      for (const line of lines) {
        if (line.startsWith('data: ')) {
          try {
            const data = JSON.parse(line.slice(6));
            if (data.token) {
              process.stdout.write(data.token); // Node.js; update the DOM in a browser
            }
          } catch (error) {
            console.error('Error parsing JSON:', error);
          }
        } else if (line.startsWith('event: error')) {
          console.error('\nError:', line);
          break;
        } else if (line === 'event: end') {
          console.log('\nStream ended');
          break;
        }
      }
    }
  } catch (error) {
    console.error('Fetch error:', error);
  }
}

// Usage
streamPrediction('your-chatflow-id', 'Hello world!');
```
Understanding the Event Stream
A prediction's event stream consists of the following event types:
start: Indicates the start of streaming
token: Emitted when a new token is available
error: Emitted if an error occurs during prediction
end: Signals the end of the prediction stream
metadata: Contains chatId, messageId, etc. Sent after all tokens and before the end event
sourceDocuments: Emitted when the flow returns sources from a vector store
usedTools: Emitted when the flow uses tools during prediction
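As the number of event types grows, one way to keep client code tidy is a small dispatch table mapping event names to handlers, rather than a chain of if/elif checks. A minimal sketch (the handler wiring here is illustrative, not part of the Tailwinds API):

```python
def dispatch_event(event, data, handlers):
    """Route a stream event to its registered handler; unknown events are ignored."""
    handler = handlers.get(event)
    if handler is not None:
        handler(data)


# Usage: collect tokens and record when the stream ends
tokens = []
ended = []
handlers = {
    "token": tokens.append,
    "end": lambda _data: ended.append(True),
}
dispatch_event("token", "Once", handlers)
dispatch_event("end", None, handlers)
dispatch_event("unknownEvent", None, handlers)  # silently ignored
```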
Example of a Token Event
```
event: token
data: Once upon a time...
```
Best Practices
Error Handling: Always implement proper error handling to manage potential issues during streaming.
Buffering: Consider implementing a client-side buffer to smooth out the display of incoming tokens.
Timeout Management: Set appropriate timeouts to handle cases where the stream might end unexpectedly.
User Interface: Design your UI to handle incoming streamed data gracefully, providing a smooth experience for the end user.
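The buffering advice above can be sketched as a small client-side class that batches tokens before rendering, so jittery token arrival turns into steadier UI updates (class and parameter names are illustrative, not part of the Tailwinds API):

```python
class TokenBuffer:
    """Batch streamed tokens and flush them to the display in groups."""

    def __init__(self, flush_every=5):
        self.flush_every = flush_every  # tokens to accumulate before rendering
        self._pending = []
        self.displayed = []  # chunks that have been rendered so far

    def add(self, token):
        self._pending.append(token)
        if len(self._pending) >= self.flush_every:
            self.flush()

    def flush(self):
        """Render whatever is pending; call once more when the stream ends."""
        if self._pending:
            self.displayed.append("".join(self._pending))
            self._pending = []
```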