Gabriel dos Anjos - 1 month ago
JavaScript Question

Large blob file in JavaScript

I have an XHR object that downloads a 1GB file.

function getFile(callback) {
    var xhr = new XMLHttpRequest();
    xhr.onload = function () {
        if (xhr.status == 200) {
            callback(xhr.response);
        } else {
            console.log("Request error: " + xhr.statusText);
        }
    };'GET', 'download', true);
    xhr.onprogress = updateProgress;
    xhr.responseType = "arraybuffer";
    xhr.send();
}

But the File API can't load all of that into memory; even from a worker it throws out of memory...

btn.addEventListener('click', function() {
    getFile(function(file) {
        var worker = new Worker("js/saving.worker.js");
        worker.onmessage = function(e) {
            saveAs(; // FileSaver.js creates a URL from the blob... but it's too large
        };
        worker.postMessage(file);
    });
});


Web Worker

onmessage = function (e) {
    var view = new DataView(, 0);
    var file = new File([view], '', { type: "application/zip" });
    postMessage(file);
};

I'm not trying to compress the file; it is already compressed on the server.

I thought about storing it in IndexedDB first, but I'll have to load the blob or file anyway; even if I request byte ranges, sooner or later I will have to build this giant blob...
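The range-bytes idea could be sketched like this. The chunk size, the "chunks" object store name, and the helper names are all assumptions for illustration, and the server is assumed to honor the Range header:

```javascript
// Sketch: fetch a file in byte ranges and store each chunk in IndexedDB.
// The 8 MB chunk size and the "chunks" store name are made up for this sketch.
var CHUNK_SIZE = 8 * 1024 * 1024;

// Build a Range header value for chunk number n,
// e.g. rangeHeader(0) -> "bytes=0-8388607"
function rangeHeader(n) {
    var start = n * CHUNK_SIZE;
    return 'bytes=' + start + '-' + (start + CHUNK_SIZE - 1);
}

// Browser-only part: download chunk n and put it in the "chunks" object store.
function saveChunk(db, url, n) {
    return fetch(url, { headers: { Range: rangeHeader(n) } })
        .then(function (res) { return res.arrayBuffer(); })
        .then(function (buf) {
            return new Promise(function (resolve, reject) {
                var tx = db.transaction('chunks', 'readwrite');
                tx.objectStore('chunks').put(buf, n);
                tx.oncomplete = resolve;
                tx.onerror = reject;
            });
        });
}
```

As noted above, this only defers the problem: at the end you still have to concatenate every stored chunk into one giant Blob.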

I want to create a blob: URL and hand it to the user after the browser has downloaded the file.

I'll use the FileSystem API for Google Chrome, but I want to make something that works in Firefox too. I looked into the FileHandle API, but found nothing...

Do I have to build an extension for Firefox in order to do the same thing the FileSystem API does for Google Chrome?

Ubuntu 32-bit


Loading 1GB+ with AJAX just to monitor download progress isn't convenient: it fills up memory.

Instead, I would just send the file with a Content-Disposition header, so the browser saves the file itself.

There are, however, ways to work around that and still monitor progress. One option is to open a second WebSocket that signals how much has been downloaded while you download normally with a GET request. The other option is described further down.

I know you talked about using Blink's sandboxed filesystem in the conversation, but it has some drawbacks: it may require permission when using persistent storage; it only allows 20% of the available disk space that is left; and if Chrome needs to free some space, it will throw away other domains' temporary storage, starting with the least recently used. Not to mention that support for it has been dropped and it may never land in other browsers, though it will most likely not be removed, since many sites still depend on it.

The only way to process a file this large is with streams. That is why I created StreamSaver. It only works in Blink (Chrome & Opera) at the moment, but other browsers will eventually support it, with the WHATWG Streams spec to back it up as a standard.

fetch(url).then(res => {
    // One idea is to get the filename from the Content-Disposition header...
    const fileStream = streamSaver.createWriteStream('')
    const writer = fileStream.getWriter()
    // Later you will be able to just simply do
    // res.body.pipeTo(fileStream)
    // instead of pumping

    const reader = res.body.getReader()
    const pump = () =>
        .then(({ value, done }) => done
            // close the stream so we stop writing
            ? writer.close()
            // Write one chunk, then get the next one
            : writer.write(value).then(pump))

    // Start the reader
    pump().then(() =>
        console.log('Closed the stream, Done writing'))
})

This will not take up any memory: each chunk goes straight from the network to disk.
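Once pipeTo lands, the whole manual pump above collapses to a single call. A sketch, assuming an environment with WHATWG streams (modern Blink, or Node 18+); an in-memory WritableStream stands in for streamSaver.createWriteStream, which is browser-only:

```javascript
// Once pipeTo is supported, the manual read/write pump becomes one call.
// This sketch pipes an in-memory source to an in-memory sink purely to
// demonstrate the mechanism; with StreamSaver the sink would be the
// fileStream and the source would be res.body.
const received = [];

const sink = new WritableStream({
    write(chunk) { received.push(chunk); } // each chunk goes straight to the sink
});

const source = new ReadableStream({
    start(controller) {
        controller.enqueue('part1');
        controller.enqueue('part2');
        controller.close();
    }
});

// With StreamSaver this would simply be: res.body.pipeTo(fileStream)
const done = source.pipeTo(sink);
```

pipeTo resolves when the source is exhausted and the sink is closed, so there is no point at which the whole file exists in memory.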