
Celery daemon does not work on CentOS 7

I am trying to run the celery daemon on CentOS 7, which uses systemctl. It is not working.

- I tried a non-daemon case and it worked.
- I ran ~mytask (roughly the call sketched below) and it freezes on the client machine, while on the server where the celery daemon is supposed to be running absolutely nothing gets logged.
- I have noticed that actually no celery processes are running.

Any suggestions on how to fix this?
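For context, the call that freezes looks roughly like this (a minimal sketch with illustrative names; my real tasks live in the pipeline app referenced in the configuration below):

# tasks.py - illustrative sketch, not my real code
from celery import Celery

# Broker address matches the one in CELERYD_OPTS below
app = Celery('pipeline', broker='amqp://192.168.168.111/')

@app.task
def mytask():
    return 'done'

if __name__ == '__main__':
    # ~sig is shorthand for sig.apply_async().get(), so this blocks
    # indefinitely if no worker is consuming from the broker
    print(~mytask.s())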

Here is my daemon default configuration:

CELERYD_NODES="localhost.localdomain"
CELERY_BIN="/tmp/myapp/venv/bin/celery"
CELERY_APP="pipeline"
CELERYD_OPTS="--broker=amqp://192.168.168.111/"
CELERYD_LOG_LEVEL="INFO"
CELERYD_CHDIR="/tmp/myapp"
CELERYD_USER="root"


Note: I am starting the daemon with

sudo /etc/init.d/celeryd start


and I got my celery daemon script from:
https://raw.githubusercontent.com/celery/celery/3.1/extra/generic-init.d/celeryd

I also tried the one from:
https://raw.githubusercontent.com/celery/celery/3.1/extra/generic-init.d/celeryd
but that one gave me an error when I tried to start the daemon:

systemd[1]: Starting LSB: celery task worker daemon...
celeryd[19924]: basename: missing operand
celeryd[19924]: Try 'basename --help' for more information.
celeryd[19924]: Starting : /etc/rc.d/init.d/celeryd: line 193: multi: command not found
celeryd[19924]: [FAILED]
systemd[1]: celeryd.service: control process exited, code=exited status=1
systemd[1]: Failed to start LSB: celery task worker daemon.
systemd[1]: Unit celeryd.service entered failed state.

Answer

celeryd is deprecated. If you are able to run the worker in non-daemon mode, say

celery worker -l info -A my_app -n my_worker

then you can simply daemonize it using celery multi:

celery multi start my_worker -A my_app -l info
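
Once the multi worker is running, you can sanity-check from Python that something is actually consuming from the broker, for example (a minimal sketch; my_app and the broker URL are assumptions based on the question above):

# check_worker.py - quick sanity check that a worker is reachable
from celery import Celery

# Assumed to match the app the worker was started with (-A my_app)
# and the broker from the question's CELERYD_OPTS
app = Celery('my_app', broker='amqp://192.168.168.111/')

# ping() broadcasts to all running workers; an empty result means
# no worker is up and consuming from this broker
replies = app.control.inspect().ping()
print(replies or 'no workers responded')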

That being said, if you still want to use celeryd, try these steps.
