
SSE works in a local environment, but fails when deployed on Vercel.

My task is to make a request to OpenAI through a proxy server, which returns a Readable Stream. Locally, the data arrives in small chunks, the SSE connection is established, and the response carries a 'Transfer-Encoding: chunked' header. When deployed on Vercel, however, the data arrives in one big chunk and the 'Transfer-Encoding' header is replaced by a 'Content-Length' header, which is not what I expect. Instead of keeping an SSE connection open, the server handles the request like an ordinary REST API call.

Here is the route handler (simplified):

app.post('/completions', (req, res) => {
    res.statusCode = 200;
    res.setHeader('Content-Type', 'text/event-stream');
    res.setHeader('Transfer-Encoding', 'chunked');
    res.setHeader('Cache-Control', 'no-cache');
    res.setHeader('X-Accel-Buffering', 'no');
    res.setHeader('Connection', 'keep-alive');

    const headers = {
        'Authorization': `Bearer MYAUTHTOKEN`
    };
    const body = {
        'messages': []
    };

    // PROXY_URL stands in for the proxy endpoint (omitted in the original post)
    axios.post(PROXY_URL, body, {
            headers: headers,
            responseType: 'stream'
        })
        .then((openairesponse) => {
            // Forward the upstream stream to the client chunk by chunk
            openairesponse.data.pipe(res);
        })
        .catch((err) => {
            res.statusCode = 500;
            res.end();
        });
});
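For context, whichever runtime delivers the chunks, each chunk the browser's EventSource parses must follow the SSE wire format: a `data:` line terminated by a blank line. A minimal framing helper (hypothetical, not part of the original code) makes the expected shape concrete:

```javascript
// Hypothetical helper: frame a JSON payload as a single SSE event.
// Each event is "data: <payload>\n\n"; the blank line terminates the event.
function sseFrame(payload) {
  return `data: ${JSON.stringify(payload)}\n\n`;
}
```

If Vercel buffers the response, the client still receives valid frames, just all at once rather than one event at a time.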
Verified Answer
The issue is likely the way Vercel handles streaming responses: its serverless runtime buffers the entire response before sending it to the client, which is why your 'Transfer-Encoding: chunked' header is replaced by a 'Content-Length' header. Removing the manual 'Transfer-Encoding: chunked' header from the server code is worth trying (Node sets it automatically for streamed responses), but it will not by itself stop the platform from buffering. If buffering persists, move the route to a runtime that supports streaming responses, or use a hosting provider that supports the kind of streaming response you are trying to send.
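On a runtime that streams responses (Vercel's Edge runtime is one example), the usual pattern is to return a web-standard Response wrapping a ReadableStream instead of writing to `res`. A minimal sketch of that pattern, runnable on Node 18+ where Response and ReadableStream are global (the function name and chunk source are illustrative, not Vercel's API):

```javascript
// Sketch: build a streaming Response from a sequence of chunks, the way a
// streaming-capable runtime would forward an upstream SSE body without
// buffering. Node 18+ provides Response, ReadableStream and TextEncoder.
function streamingResponse(chunks) {
  const encoder = new TextEncoder();
  const body = new ReadableStream({
    start(controller) {
      // Enqueue each chunk as it becomes available; here they are preloaded
      // for illustration, but they could come from an upstream fetch body.
      for (const c of chunks) controller.enqueue(encoder.encode(c));
      controller.close();
    },
  });
  return new Response(body, {
    headers: {
      'Content-Type': 'text/event-stream',
      'Cache-Control': 'no-cache',
    },
  });
}
```

In a real handler you would hand the upstream body straight through (for example `new Response(upstream.body, ...)`), so the platform relays chunks as they arrive.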