python - Celery/RabbitMQ unacked messages blocking queue?


I have a task that fetches information remotely with urllib2 and is invoked a few thousand times. The tasks are scheduled with a random ETA (within a week) so they don't all hit the server at the same time. Sometimes a 404 occurs, sometimes it doesn't; I am handling that error in case it happens.
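For context, a rough sketch of what the per-user fetch task might look like; the endpoint URL and the body of task_social_grabber_single are assumptions for illustration, only the use of urllib2 and the 404 handling come from the question:

# tasks.py (sketch) -- assumed shape of the per-user fetch task
import urllib2

from <django app>.celeryapp import app  # placeholder module name, as in the question

@app.task
def task_social_grabber_single(user):
    url = 'http://example.com/api/users/%s' % user  # hypothetical endpoint
    try:
        return urllib2.urlopen(url).read()
    except urllib2.HTTPError as e:
        if e.code == 404:
            # the 404 case mentioned above: tolerated, nothing to retry
            return None
        raise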

In the RabbitMQ console I can see 16 unacknowledged messages.

I stopped Celery, purged the queue and restarted it. The 16 unacknowledged messages were still there.
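For reference, one way to purge the default queue programmatically (a sketch; the question does not say whether the purge was done from the web console or the CLI):

from <django app>.celeryapp import app  # placeholder module name, as in the question

# purge() deletes the *ready* messages in the queues the app knows about.
# Unacknowledged messages are held by the consumer and are not touched by a
# purge, which matches the behaviour described above.
discarded = app.control.purge()
print('%d messages purged' % discarded)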

I have other tasks that go to the same queue and none of them were executed either. After purging, I tried to submit another task and its state remains ready in the RabbitMQ console.


Any ideas how I can find out why these messages remain unacknowledged?
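As a starting point (a sketch, not part of the original post), Celery's inspect API shows what the worker is holding unacknowledged, split by reason:

from <django app>.celeryapp import app  # placeholder module name, as in the question

insp = app.control.inspect()
print(insp.active())     # tasks currently executing
print(insp.reserved())   # tasks prefetched by the worker but not yet started
print(insp.scheduled())  # ETA/countdown tasks the worker is sitting on

ETA'd tasks are only acknowledged when they eventually run, so a random ETA of up to a week will legitimately keep messages in the unacked column; the problem described here is that they also block everything else in the queue.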

Versions:

celery==3.1.4
{rabbit,"RabbitMQ","3.5.3"}

celeryapp.py

CELERYBEAT_SCHEDULE = {
    'social_grabber': {
        'task': '<django app>.tasks.task_social_grabber',
        'schedule': crontab(hour=5, minute=0, day_of_week='sunday'),
    },
}
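For completeness, a sketch of how such a schedule is typically wired into a Celery 3.1 app for Django; everything except the CELERYBEAT_SCHEDULE dict itself is an assumed standard layout, not taken from the question:

# celeryapp.py (sketch) -- assumed Celery 3.1 + Django layout
from __future__ import absolute_import
import os

from celery import Celery
from celery.schedules import crontab  # needed for the crontab(...) entry above
from django.conf import settings

os.environ.setdefault('DJANGO_SETTINGS_MODULE', '<django app>.settings')

app = Celery('<django app>')
app.config_from_object('django.conf:settings')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)

# CELERYBEAT_SCHEDULE is the dict shown above, defined earlier in this module
app.conf.CELERYBEAT_SCHEDULE = CELERYBEAT_SCHEDULE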

tasks.py

from random import randint

@app.task
def task_social_grabber():
    for user in users:
        eta = randint(0, 60 * 60 * 24 * 7)  # a week in seconds
        task_social_grabber_single.apply_async((user,), countdown=eta)

There is no routing defined for this task, so it goes to the default queue: celery. There is one worker processing this queue.
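To double-check that (a sketch, not part of the original post), each worker can report which queues it actually consumes from; with no custom routing this should list only the default celery queue:

from <django app>.celeryapp import app  # placeholder module name, as in the question

print(app.control.inspect().active_queues())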

supervisord.conf:

[program:celery]
autostart = true
autorestart = true
command = celery worker -A <django app>.celeryapp:app --concurrency=3 -l info -n celery

RabbitMQ broke QoS settings in version 3.3. You need to upgrade Celery to at least 3.1.11 (changelog) and Kombu to at least 3.0.15 (changelog). You should use the latest versions though.
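A quick way to check whether an installation is below those thresholds (a sketch added here, not part of the original answer):

import celery
import kombu

print(celery.__version__)  # needs to be at least 3.1.11 (the question shows 3.1.4)
print(kombu.__version__)   # needs to be at least 3.0.15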

I hit this exact same behavior when 3.3 was released. RabbitMQ flipped the default behavior of the prefetch_count flag. Before this, if a consumer reached its CELERYD_PREFETCH_MULTIPLIER limit in ETA'd messages, the worker would raise that limit in order to fetch more messages. The change broke that behavior, because the new default denies that capability.
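To make the limit concrete (an illustration based on the settings shown in the question, not text from the original answer): the worker's initial prefetch_count is the concurrency times CELERYD_PREFETCH_MULTIPLIER, and ETA'd messages it has already pulled in count against it.

# Celery 3.x setting, shown here at its default value for illustration.
CELERYD_PREFETCH_MULTIPLIER = 4

# With the supervisord command above (--concurrency=3) and the default
# multiplier, the worker starts with prefetch_count = 3 * 4 = 12. ETA'd
# tasks stay unacknowledged until they run, so the worker has to be able to
# raise prefetch_count to keep fetching -- the capability that the RabbitMQ
# 3.3 default change broke for older Celery/Kombu versions.
print(3 * CELERYD_PREFETCH_MULTIPLIER)  # initial prefetch_count: 12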

