rmharrison - 5 months ago
Node.js Question

AWS elastic beanstalk deploy fails with ENOMEM error

Your AWS Elastic Beanstalk deployment fails:
- Intermittently
- For no apparent reason

Step 1: Check the obvious log

/var/log/eb-activity.log

Running npm install: /opt/elasticbeanstalk/node-install/node-v6.10.0-linux-x64/bin/npm
Setting npm config jobs to 1
npm config jobs set to 1
Running npm with --production flag
Failed to run npm install. Snapshot logs for more details.
Traceback (most recent call last):
  File "/opt/elasticbeanstalk/containerfiles/ebnode.py", line 695, in <module>
    main()
  File "/opt/elasticbeanstalk/containerfiles/ebnode.py", line 677, in main
    node_version_manager.run_npm_install(options.app_path)
  File "/opt/elasticbeanstalk/containerfiles/ebnode.py", line 136, in run_npm_install
    self.npm_install(bin_path, self.config_manager.get_container_config('app_staging_dir'))
  File "/opt/elasticbeanstalk/containerfiles/ebnode.py", line 180, in npm_install
    raise e
subprocess.CalledProcessError: Command '['/opt/elasticbeanstalk/node-install/node-v6.10.0-linux-x64/bin/npm', '--production', 'install']' returned non-zero exit status 1 (ElasticBeanstalk::ExternalInvocationError)
caused by: + /opt/elasticbeanstalk/containerfiles/ebnode.py --action npm-install


Step 2: Google for the appropriate snapshot log file...

/var/log/nodejs/npm-debug.log

58089 verbose stack Error: spawn ENOMEM
58089 verbose stack at exports._errnoException (util.js:1022:11)
58089 verbose stack at ChildProcess.spawn (internal/child_process.js:313:11)
58089 verbose stack at exports.spawn (child_process.js:380:9)
58089 verbose stack at spawn (/opt/elasticbeanstalk/node-install/node-v6.10.0-linux-x64/lib/node_modules/npm/lib/utils/spawn.js:21:13)
58089 verbose stack at runCmd_ (/opt/elasticbeanstalk/node-install/node-v6.10.0-linux-x64/lib/node_modules/npm/lib/utils/lifecycle.js:247:14)
58089 verbose stack at /opt/elasticbeanstalk/node-install/node-v6.10.0-linux-x64/lib/node_modules/npm/lib/utils/lifecycle.js:211:7
58089 verbose stack at _combinedTickCallback (internal/process/next_tick.js:67:7)
58089 verbose stack at process._tickCallback (internal/process/next_tick.js:98:9)
58090 verbose cwd /tmp/deployment/application
58091 error Linux 4.4.44-39.55.amzn1.x86_64
58092 error argv "/opt/elasticbeanstalk/node-install/node-v6.10.0-linux-x64/bin/node" "/opt/elasticbeanstalk/node-install/node-v6.10.0-linux-x64/bin/npm" "--production" "install"
58093 error node v6.10.0
58094 error npm v3.10.10
58095 error code ENOMEM
58096 error errno ENOMEM
58097 error syscall spawn
58098 error spawn ENOMEM


Step 3: Obvious options...


  • Use a bigger instance... and it works.

  • Don't fix it, just try again:

    • Deploy again... and it works.

    • Clone the environment... and it works.

    • Rebuild the environment... and it works.

  • Either way, you are left feeling dirty and wrong.


Answer Source

TL;DR

Your instances (t2.micro in my case) are running out of memory because the instance spin-up is parallelized.

Hack resolution: Provision swap space on the instance and retry.

For a one-off fix, while logged into the instance:

sudo /bin/dd if=/dev/zero of=/var/swap.1 bs=1M count=1024   # create a 1 GB swap file
sudo /sbin/mkswap /var/swap.1                               # format it as swap space
sudo chmod 600 /var/swap.1                                  # restrict permissions before enabling
sudo /sbin/swapon /var/swap.1                               # enable it (lasts until reboot)

Source and more detail: "How do you add swap to an EC2 instance?"

During deployment we now use a bit of swap, but nothing crashes:

Mem:   1019116k total,   840880k used,   178236k free,    15064k buffers
Swap:  1048572k total,    12540k used,  1036032k free,    62440k cached

Actual resolutions

Bigger instances

  • While storage can be scaled via EBS, instances come with fixed CPU and RAM (AWS source).
  • They cost money, and these are just dev instances where memory is only a problem during spin-up.

Automate provisioning of swap in Elastic Beanstalk

  • Probably .ebextensions/
  • Open question: CloudFormation-style resources, or a hook on deploy/restart?
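One possible answer, sketched as an .ebextensions config. The file name and command key are made up, and the swap size mirrors the one-off commands above; Elastic Beanstalk `commands` run as root early in deployment, and the `test` directive skips the step when the swap file already exists:

```yaml
# .ebextensions/01-swap.config  (hypothetical file name)
commands:
  01_provision_swap:
    # Skip on redeploys where the swap file already exists
    test: test ! -f /var/swap.1
    command: |
      /bin/dd if=/dev/zero of=/var/swap.1 bs=1M count=1024
      chmod 600 /var/swap.1
      /sbin/mkswap /var/swap.1
      /sbin/swapon /var/swap.1
```

This keeps the hack, but at least it is applied automatically on every new instance instead of by hand over SSH.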

Hop on the 'server-less' bandwagon

  • The promise of API Gateway + Lambda + friends is that we shouldn't have to deal with this ish.
  • Are you 'tall enough' for cloud-native microservices? Are they even appropriate to your problem, when something staid/unfashionable like SOA would suffice?
  • Once cloud-first, reverting to on-prem would be difficult, and on-prem is a hard requirement for some.

Use less bloated packages

  • Sometimes you're stuck with legacy.
  • Bloat can come from necessary transitive or sub-dependencies. Where does it end... decomposing other people's libraries?

Explanation

A quick Google reveals that ENOMEM is an out-of-memory error. t2.micro instances have only 1 GB of RAM.

Rarely would we use this much in dev; however, Elastic Beanstalk parallelizes parts of the build process through spawned workers. This means that during setup, with larger packages, the instance may run out of memory and the operation fails.

Using free (values in kB), we can see...

Start (plenty of free memory)

             total       used       free     shared    buffers     cached
Mem:       1019116     609672     409444        144      45448     240064
-/+ buffers/cache:     324160     694956
Swap:            0          0          0

Ran out of memory (at next tick)

Mem:       1019116     947232      71884        144      11544      81280
-/+ buffers/cache:     854408     164708
Swap:            0          0          0

Deploy process aborted

             total       used       free     shared    buffers     cached
Mem:       1019116     411892     607224        144      13000      95460
-/+ buffers/cache:     303432     715684
Swap:            0          0          0
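The middle snapshot makes the arithmetic concrete. As a quick sketch (figures in kB, copied from the output above):

```python
# Memory figures in kB, taken from the snapshots above
total_kb = 1019116         # t2.micro: ~1 GB of RAM
used_at_crash_kb = 947232  # the "ran out of memory" snapshot
headroom_kb = total_kb - used_at_crash_kb

print(headroom_kb)          # 71884 kB free, matching the snapshot
print(headroom_kb // 1024)  # ~70 MB of headroom, and no swap to fall back on
```

Roughly 70 MB is not enough room for npm's parallel lifecycle workers to fork, so the spawn syscall fails with ENOMEM; with the 1 GB swap file in place there is somewhere to spill, and the deploy survives.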