
Socket hang up: Websocket GET HTTPS request to Twitter API

I'd like to make an HTTPS GET request from my Node server to Twitter, parse the response data, and send it through a websocket (ws, not wss) connection. Everything is working -- except that, after 30 seconds (I timed it; it's always 30 +/- 1 seconds), the socket connection hangs up, and I get the following error stack:

Error: socket hang up
at createHangUpError (_http_client.js:200:15)
at TLSSocket.socketOnEnd (_http_client.js:292:23)
at emitNone (events.js:72:20)
at TLSSocket.emit (events.js:166:7)
at endReadableNT (_stream_readable.js:905:12)
at nextTickCallbackWith2Args (node.js:442:9)
at process._tickCallback (node.js:356:17)

I've been using the same server design with Twitter's public stream, and it has worked well. It's only when implementing the GET request that the hangups occur.

So far, I've tried the following solutions:

  • throttling the incoming request rate way down with a low count value in Twitter's URI-encoded query string; the timeout occurs even with a single tweet sent back to me, still after ~30 seconds

  • set a keep-alive agent in my GET request options

  • tried a manually built agent (no agentkeepalive module) for the request

  • set keepAliveInterval on the socket server

  • and a bunch of Node newbish things

all to no avail. The app works beautifully for 30 seconds; then it hangs up.

My next lead: Twitter requires that application-only authenticated requests be sent via HTTPS. I haven't made any provisions in my code for the different security level, so I'm going to follow up on that now -- and see if the SO community has any thoughts. Any help you can offer is hugely appreciated!
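For context, the BEARER_ACCESS_TOKEN in the code below comes from Twitter's application-only auth flow (OAuth 2 client credentials). A sketch, with placeholder key/secret:

```javascript
// Hedged sketch of Twitter's application-only auth handshake.
// CONSUMER_KEY / CONSUMER_SECRET are placeholders, not real credentials.
var consumerKey = 'CONSUMER_KEY';
var consumerSecret = 'CONSUMER_SECRET';

// 1. Base64-encode "key:secret" ...
var basicCredentials = Buffer.from(consumerKey + ':' + consumerSecret).toString('base64');

// 2. ... and POST it to https://api.twitter.com/oauth2/token with
//      Authorization: Basic <basicCredentials>
//      Content-Type: application/x-www-form-urlencoded;charset=UTF-8
//      body: grant_type=client_credentials
// The JSON response carries { token_type: 'bearer', access_token: '...' },
// which is what ends up in process.env.BEARER_ACCESS_TOKEN below.
```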

Here's the code, stripped down to bare essentials:

// Initialize basic server for the web socket requests
var handShakeServer = http.createServer(function(request, response) {
    console.log((new Date()) + ' Received request for ' + request.url);
    response.writeHead(404);
    response.end();
});

handShakeServer.listen(8080, function() {
    console.log((new Date()) + ' Socket server is listening on port 8080');
});

// Initialize the socket server itself.
var socketServer = new WebSocketServer({
    httpServer: handShakeServer,
    autoAcceptConnections: false,
    keepAliveInterval: (3600 * 1000),
    dropConnectionOnKeepaliveTimeout: false
});

// On request, listen for messages.
socketServer.on('request', function(request) {

    // Initialize connection from verified origin
    var connection = request.accept('echo-protocol', request.origin);

    // On message (search params from client), query Twitter.
    connection.on('message', function(message) {

        // BEARER_ACCESS_TOKEN is for Twitter's application-only authentication
        var options = {
            hostname: 'api.twitter.com',
            port: 443,
            path: '/1.1/search/tweets.json?q=stuff',
            method: 'GET',
            agent: agent,
            headers: {
                'Authorization': ('Bearer ' + process.env.BEARER_ACCESS_TOKEN),
                'Accept': '*/*'
            }
        };

        // Query twitter via HTTPS GET, listen for response
        var req = https.request(options, function(res) {
            var responseString = '';

            // On data, concatenate the chunks into a whole JSON object
            res.on('data', function(tweet) {
                responseString += tweet;
            });

            // On completion of request, send data to be analyzed.
            res.on('end', function() {

                // Once returned, send data to client through socket connection.
                var result = doSomeAnalysis(JSON.parse(responseString));
                connection.sendUTF(JSON.stringify(result));
            });
        });

        // The https request is done; terminate it.
        req.end();
    });
});
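One detail worth calling out from the response handler above: the 'data' chunks arrive as Buffers split at arbitrary byte boundaries, so JSON.parse has to wait for the 'end' event. A tiny standalone illustration (the sample payload is made up):

```javascript
// Chunks as they might arrive from res.on('data') -- Buffers, split at
// arbitrary byte boundaries (sample payload is made up):
var chunks = [
    Buffer.from('{"statuses":[{"id":1,"te'),
    Buffer.from('xt":"hello"}]}')
];

var responseString = '';
chunks.forEach(function(chunk) {
    responseString += chunk; // Buffer -> string coercion, then concatenation
});

// Only once all chunks are in is the JSON well-formed:
var parsed = JSON.parse(responseString);
```

(One caveat with `responseString += chunk`: a multi-byte UTF-8 character split across two chunks can get mangled by the per-chunk coercion; calling res.setEncoding('utf8'), or collecting Buffers and joining with Buffer.concat, avoids that.)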

Also, on the web socket client side, I have:

client.connect('ws://localhost:8080/', 'echo-protocol', 'twitterQuery');

And here are the modules of relevance in my server.js:

var util = require('util');
var https = require('https');
var http = require('http');
var WebSocketServer = require('websocket').server;
var HttpsAgent = require('agentkeepalive').HttpsAgent;


Ok! Turns out the HTTPS compatibility was not causing the hangups. When I completely decoupled the websocket connection logic from the Twitter request logic and put console.log dummy functions where they used to overlap, the Twitter query worked fine, but the websocket connection threw the hangup error exactly as before -- always after 30 seconds, and regardless of how much data was being sent over the connection.

The fix: manually add ping/pong measures, in one of two ways (or both, for extra redundancy):


    // Option 1: Use the npm module's event 'ping'
    connection.on('ping', function(data) {
        console.log('Server: ping received at: ' + (new Date()));
    });

    // Option 2: Use the more generalized event 'message', look for your
    // custom 'ping' message
    connection.on('message', function(message) {
        console.log("Socket server received message from client.");
        if (message.utf8Data == 'ping') {
            console.log('Server: ping received at: ' + (new Date()));
        }
    });

    // Client side: ping the server just inside the 30-second window.
    setInterval(function() {
        if (connection.connected) {
            // To match Option 1 above, use this:
            connection.ping();
            // To match Option 2 above, use this:
            // connection.sendUTF('ping');
        }
    }, 19900);

    connection.on('pong', function() {
        console.log('Client: pong received at: ' + (new Date()) + '\n');
    });
So, in the end, a very basic fix. I don't get why the connection needs these keep-alive pings, as the Twitter response is piped directly through the connection immediately after it exits my analysis function, at ~1-second intervals, keeping the connection extremely active (but not overloaded). See the code above for where this happens:

// Once returned, send data to client through socket connection.
var result = doSomeAnalysis(JSON.parse(responseString));
connection.sendUTF(JSON.stringify(result));

Probably it's that there's no 'pong' being sent back to the server, so even though the websocket connection keeps its data flying out, it has no idea whether the receiving end is still active. But -- why don't I need the same measure for my streaming activity? It works through the same channel, receives no data back from the client (no 'pong') once the streaming starts, and will continue in perpetuity (until Google geocoding shuts me down, that is :) ).

I guess that, by inference, there must be some keep-alive ponging present by default in a stream, and that I need to add it manually to the GET request because I am, essentially, rigging the batch response to stream out, small chunk by small chunk, as the returns exit my analysis function.
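To make the timing easier to reason about, here is the keepalive decision from the fix isolated into a helper with a stub connection. sendPingIfConnected is my (hypothetical) name for it, and the stub mimics only the two members the loop touches:

```javascript
// Hypothetical helper isolating the keepalive decision; in the real app
// the connection object comes from the websocket module.
function sendPingIfConnected(connection) {
    if (connection.connected) {
        connection.ping(); // Option 1; Option 2 would be connection.sendUTF('ping')
        return true;
    }
    return false;
}

// Stub standing in for a live WebSocketConnection:
var stub = {
    connected: true,
    pingsSent: 0,
    ping: function() { this.pingsSent++; }
};

sendPingIfConnected(stub); // connected: fires a ping
stub.connected = false;
sendPingIfConnected(stub); // dropped connection: no ping sent

// In the server this runs on a timer, every 19.9 s -- comfortably inside
// the ~30 s hangup window:
// setInterval(function() { sendPingIfConnected(connection); }, 19900);
```

The 19900 ms interval is the point: any value under the ~30 s hangup window works, and keeping it just under gives the fewest extra frames.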

Cool. Anyway, I thought I'd leave this (too lengthy) post in case anyone is getting hung up (pun!) on using a GET request with big data chunks and then faux-streaming them through a web socket connection. The combo works very nicely with this issue resolved.