Gem is not working in Rails 4.1.2..? #37
Comments
I'd accept a pull request on it, but I probably will not be able to get to this problem myself for a while. @kzk Have you seen this?
@joshidhruv I'm running Rails 4.1.4 and Ruby 2.1.2 without any problems. Did you load the gem after the unicorn gem?
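For context, a minimal sketch of the Gemfile ordering being asked about (the question could equally refer to the require order in config.ru, which later comments address):

```ruby
# Gemfile — list unicorn before unicorn-worker-killer
gem 'unicorn'
gem 'unicorn-worker-killer'
```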
@jvanbaarsen Yes I did, and I also used the Heroku API to do that. But my problem is that I am getting a memory-exceeded error on Heroku, and if I shut down my worker, my background jobs with Sidekiq end as well. I have a memory leak in the app and it's very hard to find. Thanks for the help.
@joshidhruv Ah ok, so the problem is not that you get an error from this gem, but that you're running out of memory on Heroku?
I am sorry, I was not clear. I had the NoMethodError and I am not sure why; I stopped using the gem because I didn't find any other solution. Now for my app: I have 2 (1x) dynos, so 512 MB each, plus 1 (2x) worker and 1 (small) Redis instance. I have 3 worker processes in config/unicorn.rb and a concurrency of 1 in config/sidekiq.rb. I am not sure if you know much about Sidekiq, but it's a background job scheduler/worker.
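For reference, a sketch of the Unicorn side of the setup described in that comment; the file contents are an assumption based on the description, not copied from the app:

```ruby
# config/unicorn.rb — 3 Unicorn worker processes, as described above
worker_processes 3
```

The Sidekiq concurrency of 1 mentioned there would typically be set in `config/sidekiq.yml` (`:concurrency: 1`) or via `sidekiq -c 1`, rather than in the Unicorn config.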
Ok, do you have an idea of how much memory a single process is using? When a single process uses more than about 170 MB (512 MB / 3 Unicorn processes ≈ 170 MB), your dyno will run out of memory. Did you try lowering the number of Unicorn processes?
When I run it locally, Activity Monitor shows me ~600 MB of RAM usage, and I am not sure what you mean by basic footprint. In short, my app is a web scraper that looks for a word across something like 10,000 websites. I am not using the web dyno to do this work; I bought a worker dyno which does this job. I don't see the web dyno having memory issues in the logs, but the worker is complaining. I have 1 GB of RAM for the worker and it still goes out of bounds. The Heroku documentation suggests having more than 2 worker processes; I did try with 2, but it didn't help.
@joshidhruv Hm ok, I'm afraid I can't help you :(
No worries, I am trying to move to delayed_job and will see how that goes. Thank you @jvanbaarsen for taking part.
@joshidhruv Can you paste the exact exception you got? It's hard for us to 'guess' your situation.
Hey @kzk, this is what my /config.ru file looks like:

```ruby
require 'unicorn/worker_killer'

# Max memory size (RSS) per worker
use Unicorn::WorkerKiller::Oom, oom_min, oom_max
```

It's set up on Heroku and I have only 2 dynos and 1 worker. In the unicorn.rb file I have this:

```ruby
worker_processes 3
```

And since I updated the Gemfile it does not show me any error, but it's not working: I can see my worker's memory going up to 1 GB.
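The snippet above references `oom_min` and `oom_max` without showing how they are defined. A sketch of how those bounds are commonly set for this gem; the byte values here are illustrative assumptions, not taken from the thread:

```ruby
# Illustrative only: RSS bounds in bytes. The gem picks a limit between
# min and max per worker so that workers don't all restart at once.
oom_min = 192 * (1024**2)   # ~192 MB
oom_max = 256 * (1024**2)   # ~256 MB

# Max memory size (RSS) per worker
use Unicorn::WorkerKiller::Oom, oom_min, oom_max
```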
I had the same problem with Ruby 2.1.2, and to make it work I had to add a `require 'unicorn'` before the `require 'unicorn/worker_killer'`. Here is my config:

```ruby
require ::File.expand_path('../config/environment', __FILE__)
```
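Pieced together, the fix described in that comment would look roughly like this in config.ru (a sketch; the Oom thresholds are placeholders, since the original comment was cut off):

```ruby
# config.ru — require 'unicorn' before 'unicorn/worker_killer',
# which was the workaround reported for Ruby 2.1.2
require 'unicorn'
require 'unicorn/worker_killer'

# Max memory size (RSS) per worker — placeholder values, not from the thread
use Unicorn::WorkerKiller::Oom, (192 * (1024**2)), (256 * (1024**2))

require ::File.expand_path('../config/environment', __FILE__)
run Rails.application
```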
@joshidhruv How were you able to see what your Unicorn workers were consuming? I am not on Heroku, so SSH'ing into the instance and doing …
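For the question above about inspecting per-worker memory when not on Heroku, a small sketch of one common approach (not something described in the thread); it shells out to the standard `ps` utility:

```ruby
# Print the resident set size (RSS) of the current process in MB.
# Works on Linux/macOS, where `ps -o rss=` reports RSS in kilobytes.
rss_kb = `ps -o rss= -p #{Process.pid}`.to_i
puts "RSS: #{rss_kb / 1024} MB"
```

Run against a Unicorn worker's PID instead of `Process.pid`, this gives the per-worker figure the gem's Oom thresholds are compared against.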
Same here, will try your fix @mool.
Is this still an issue, or can we close it?
I am running Rails 4.1.2 and Ruby 2.1.2 and I do not see the Oom NoMethodError. I can submit more details if you need confirmation that it is working. Thanks.