Error Codes
Whenever your app experiences an error, Heroku will return a standard error page with the HTTP status code 503. To help you debug the underlying error, however, the platform will also add custom error information to your logs. Each type of error gets its own error code, with all HTTP errors starting with the letter H and all runtime errors starting with R. Logging errors start with L.
A crashed web process or a boot timeout on the web process will present this error (H10).
2010-10-06T21:51:04-07:00 heroku[web.1]: State changed from down to starting
2010-10-06T21:51:07-07:00 app[web.1]: Starting process with command: `thin -p 22020 -e production -R /home/heroku_rack/heroku.ru start`
2010-10-06T21:51:09-07:00 app[web.1]: >> Using rails adapter
2010-10-06T21:51:09-07:00 app[web.1]: Missing the Rails 2.3.5 gem. Please `gem install -v=2.3.5 rails`, update your RAILS_GEM_VERSION setting in config/environment.rb for the Rails version you do have installed, or comment out RAILS_GEM_VERSION to use the latest version installed.
2010-10-06T21:51:10-07:00 heroku[web.1]: Process exited
2010-10-06T21:51:12-07:00 heroku[router]: Error H10 (App crashed) -> GET myapp.heroku.com/ web=web.1 queue=0 wait=0ms service=0ms bytes=0
2010-10-06T21:51:12-07:00 heroku[nginx]: GET / HTTP/1.1 | 75.36.147.245 | 8132 | http | 503
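In this particular crash the log already names the fix: the app expects Rails 2.3.5 but the gem is missing from the environment. A minimal sketch of the two usual remedies, assuming a Rails 2.3-era app (the version number comes from the log above; your app's layout may differ):

# config/environment.rb -- pin the Rails version the app was built against
RAILS_GEM_VERSION = '2.3.5' unless defined? RAILS_GEM_VERSION

# Gemfile -- or, with Bundler, declare the dependency explicitly so it is installed at deploy time
gem 'rails', '2.3.5'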
There are too many HTTP requests in your backlog (H11). Increasing your dynos, for example with heroku ps:scale web=2, is the usual solution.
2010-10-06T21:51:07-07:00 heroku[router]: Error H11 (Backlog too deep) -> GET myapp.heroku.com/ web=web.1 queue=51 wait=0ms service=0ms bytes=0
2010-10-06T21:51:07-07:00 heroku[nginx]: GET / HTTP/1.1 | 75.36.147.245 | 8132 | http | 503
An HTTP request took longer than 30 seconds to complete (H12). In the example below, a Rails app takes 37 seconds to render the page; the HTTP router returns a 503 before Rails completes its request cycle, but the Rails process keeps running, so the completion message appears after the router's error.
2010-10-06T21:51:07-07:00 app[web.2]: Processing PostController#list (for 75.36.147.245 at 2010-10-06 21:51:07) [GET]
2010-10-06T21:51:08-07:00 app[web.2]: Rendering template within layouts/application
2010-10-06T21:51:19-07:00 app[web.2]: Rendering post/list
2010-10-06T21:51:37-07:00 heroku[router]: Error H12 (Request timeout) -> GET myapp.heroku.com/ web=web.2 queue=0 wait=0ms service=0ms bytes=0
2010-10-06T21:51:37-07:00 heroku[nginx]: GET / HTTP/1.1 | 75.36.147.245 | 8132 | http | 503
2010-10-06T21:51:42-07:00 app[web.2]: Completed in 37400ms (View: 27, DB: 21) | 200 OK [http://myapp.heroku.com/]
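The usual mitigation for H12 is to keep the request/response cycle well under 30 seconds and push long-running work into a background job. A minimal sketch, assuming a hypothetical Sinatra endpoint and an in-process queue standing in for a real job system such as Delayed Job or Resque:

require 'sinatra'

JOB_QUEUE = Queue.new            # stand-in for a real background job queue

post '/reports' do
  JOB_QUEUE << params['id']      # hand the slow work to a worker instead of doing it inline
  status 202                     # respond well under the router's 30-second limit
  'queued'
end

# a worker (here just a thread, normally a separate worker process) drains the queue
Thread.new do
  loop do
    id = JOB_QUEUE.pop
    # generate_report(id)        # hypothetical slow operation that may take 30+ seconds
  end
end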
This is an edge case (H13) that appears far more rarely than the other errors. It covers certain malfunctioning binary gems and other situations where your web process accepts the connection but then closes the socket without writing anything to it.
2010-10-06T21:51:37-07:00 heroku[router]: Error H13 (Connection closed without response) -> GET myapp.heroku.com/ web=web.2 queue=0 wait=0ms service=173ms bytes=0
2010-10-06T21:51:37-07:00 heroku[nginx]: GET / HTTP/1.1 | 75.36.147.245 | 8132 | http | 503
This error (H14) is most likely the result of scaling your web processes down to zero dynos. To fix it, scale your web process to 1 or more dynos:
$ heroku ps:scale web=1
Use the heroku ps command to determine the state of your web processes.
The idle connection error (H15) is logged when a request is terminated due to 55 seconds of inactivity.
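If your app deliberately holds connections open (long polling, streamed responses), the usual workaround is to write something to the socket before 55 seconds of silence elapse. A minimal sketch using Sinatra's stream helper, with an illustrative heartbeat interval:

require 'sinatra'

get '/events' do
  stream do |out|
    10.times do
      out << "ping\n"            # heartbeat so the connection never sits idle for 55 seconds
      sleep 30
    end
  end
end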
Apps on Cedar's HTTP routing stack use the herokuapp.com domain. Requests made to a Cedar app at its deprecated heroku.com domain will be redirected to the correct herokuapp.com address, and this redirect message (H16) will be inserted into the app's logs.
This error (H17) is logged when the routing mesh detects a malformed HTTP response coming from a dyno.
Either the client socket or the backend socket (your app's web process) was closed before the backend returned an HTTP response (H18). The sock field in the log has the value client or backend, depending on which socket was closed.
The routing mesh received a connection timeout error after 5 seconds attempting to open a socket to your web process. This is usually a symptom of your app being overwhelmed and failing to accept new connections in a timely manner. If you have multiple dynos, the routing mesh will retry multiple dynos before logging H19 and serving a standard error page.
2010-10-06T21:51:07-07:00 heroku[router]: Error H19 (Backend connection timeout) -> GET myapp.heroku.com/ dyno=web.1 queue= wait= service= status=503 bytes=
The router will enqueue requests for 75 seconds while waiting for starting processes to reach an "up" state. If no web process has reached an "up" state after 75 seconds, the router logs H20 and serves a standard error page.
2010-10-06T21:51:07-07:00 heroku[router]: Error H20 (App boot timeout) -> GET myapp.heroku.com/ dyno= queue= wait=75000ms service= status=503 bytes=
This is not an error, but we give it a code for the sake of completeness. Note the log formatting is the same but without the word "Error".
2010-10-06T21:51:07-07:00 heroku[router]: H80 (Maintenance mode) -> GET myapp.heroku.com/ web=none queue=0 wait=0ms service=0ms bytes=0
2010-10-06T21:51:07-07:00 heroku[nginx]: GET / HTTP/1.1 | 75.36.147.245 | 8132 | http | 503
This indicates an internal error in the Heroku platform. Unlike all of the other errors, which require action from you to correct, this one does not. Try again in a minute, or check the status site.
2010-10-06T21:51:07-07:00 heroku[router]: Error H99 (Platform error) -> GET myapp.heroku.com/ web=none queue=0 wait=0ms service=0ms bytes=0
2010-10-06T21:51:07-07:00 heroku[nginx]: GET / HTTP/1.1 | 75.36.147.245 | 8132 | http | 503
A web process took longer than 60 seconds to bind to its assigned $PORT. This error is often caused by a process being unable to reach an external resource, such as a database.
2011-05-03T17:31:38+00:00 heroku[web.1]: State changed from created to starting
2011-05-03T17:31:40+00:00 heroku[web.1]: Starting process with command: `thin -p 22020 -e production -R /home/heroku_rack/heroku.ru start`
2011-05-03T17:32:40+00:00 heroku[web.1]: Error R10 (Boot timeout) -> Web process failed to bind to $PORT within 60 seconds of launch
2011-05-03T17:32:40+00:00 heroku[web.1]: Stopping process with SIGKILL
2011-05-03T17:32:40+00:00 heroku[web.1]: Process exited
2011-05-03T17:32:41+00:00 heroku[web.1]: State changed from starting to crashed
A process performed an incorrect TCP socket bind. This error is often caused by binding to a port other than the assigned $PORT. It can also be caused by binding to an interface other than 0.0.0.0 or *.
2011-05-03T17:38:16+00:00 heroku[web.1]: Starting process with command: `bundle exec ruby web.rb -e production`
2011-05-03T17:38:18+00:00 app[web.1]: == Sinatra/1.2.3 has taken the stage on 4567 for production with backup from Thin
2011-05-03T17:38:18+00:00 app[web.1]: >> Thin web server
2011-05-03T17:38:18+00:00 app[web.1]: >> Maximum connections set to 1024
2011-05-03T17:38:18+00:00 app[web.1]: >> Listening on 0.0.0.0:4567, CTRL+C to stop
2011-05-03T17:38:18+00:00 heroku[web.1]: Error R11 (Bad bind) -> Process bound to port 4567, should be 43411 (see environment variable PORT)
2011-05-03T17:38:18+00:00 heroku[web.1]: Stopping process with SIGKILL
2011-05-03T17:38:18+00:00 heroku[web.1]: Process exited
2011-05-03T17:38:20+00:00 heroku[web.1]: State changed from starting to crashed
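Both R10 and R11 usually come down to the same fix: listen on the port Heroku assigns through the PORT environment variable, on all interfaces. A minimal Sinatra sketch (the option names are classic Sinatra; other frameworks have their own equivalents):

require 'sinatra'

set :bind, '0.0.0.0'                       # listen on all interfaces, not just localhost
set :port, ENV.fetch('PORT', 4567).to_i    # use the assigned port on Heroku, a default locally

get '/' do
  'hello'
end

In the failing log above, the process was started without being told which port to use; for an app launched from a Procfile, passing the port through the start command (for example web: bundle exec ruby web.rb -p $PORT -e production) achieves the same thing.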
A process failed to exit within 10 seconds of being sent a SIGTERM indicating that it should stop. The process is sent SIGKILL to force an exit.
2011-05-03T17:40:10+00:00 app[worker.1]: Working
2011-05-03T17:40:11+00:00 heroku[worker.1]: Stopping process with SIGTERM
2011-05-03T17:40:11+00:00 app[worker.1]: Ignoring SIGTERM
2011-05-03T17:40:14+00:00 app[worker.1]: Working
2011-05-03T17:40:18+00:00 app[worker.1]: Working
2011-05-03T17:40:21+00:00 heroku[worker.1]: Error R12 (Exit timeout) -> Process failed to exit within 10 seconds of SIGTERM
2011-05-03T17:40:21+00:00 heroku[worker.1]: Stopping process with SIGKILL
2011-05-03T17:40:21+00:00 heroku[worker.1]: Process exited
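To avoid R12, the process should trap SIGTERM, wrap up its current unit of work, and exit within the 10-second window. A minimal sketch for a worker loop (the work itself is a placeholder):

shutdown = false
Signal.trap('TERM') { shutdown = true }   # note the request and stop at a safe point

until shutdown
  # do_one_unit_of_work                   # hypothetical small, resumable step
  sleep 1
end

puts 'Shutting down cleanly'              # reached well within the 10-second window
exit 0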
A process started with heroku run failed to attach to the invoking client.
2011-06-29T02:13:29+00:00 app[run.3]: Awaiting client
2011-06-29T02:13:30+00:00 heroku[run.3]: State changed from starting to up
2011-06-29T02:13:59+00:00 app[run.3]: Error R13 (Attach error) -> Failed to attach to process
2011-06-29T02:13:59+00:00 heroku[run.3]: Process exited
A process requires memory in excess of its 512MB quota. If this error occurs, the process will page to swap space to continue running, which may cause degraded process performance.
2011-05-03T17:40:10+00:00 app[worker.1]: Working
2011-05-03T17:40:10+00:00 heroku[worker.1]: Process running mem=528MB(103.3%)
2011-05-03T17:40:11+00:00 heroku[worker.1]: Error R14 (Memory quota exceeded)
2011-05-03T17:41:52+00:00 app[worker.1]: Working
A process requires vastly more memory than its 512MB quota and is consuming excessive swap space. If this error occurs, the process will be killed by the platform.
2011-05-03T17:40:10+00:00 app[worker.1]: Working
2011-05-03T17:40:10+00:00 heroku[worker.1]: Process running mem=2565MB(501.0%)
2011-05-03T17:40:11+00:00 heroku[worker.1]: Error R15 (Memory quota vastly exceeded)
2011-05-03T17:40:11+00:00 heroku[worker.1]: Stopping process with SIGKILL
2011-05-03T17:40:12+00:00 heroku[worker.1]: Process exited
An attached process is continuing to run after being sent SIGHUP when its external connection was closed. This is usually a mistake, though some apps might want to do this intentionally.
2011-05-03T17:32:03+00:00 heroku[run.1]: Awaiting client
2011-05-03T17:32:03+00:00 heroku[run.1]: Starting process with command `bash`
2011-05-03T17:40:11+00:00 heroku[run.1]: Client connection closed. Sending SIGHUP to all processes
2011-05-03T17:40:16+00:00 heroku[run.1]: Client connection closed. Sending SIGHUP to all processes
2011-05-03T17:40:21+00:00 heroku[run.1]: Client connection closed. Sending SIGHUP to all processes
2011-05-03T17:40:26+00:00 heroku[run.1]: Error R16 (Detached) -> An attached process is not responding to SIGHUP after its external connection was closed.
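R16 typically means the process is trapping or ignoring SIGHUP. Unless it genuinely needs to outlive its console session, the simplest fix is to exit when the client disconnects. A minimal sketch:

Signal.trap('HUP') do
  exit                                    # the attached client has gone away; stop rather than linger
end

loop do
  # interactive or long-running work here
  sleep 1
end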
A Logplex drain could not send logs to its destination fast enough to keep up with the volume of logs generated by the application, and Logplex has been forced to discard some messages. You will need to investigate reducing the volume of logs generated by your application (e.g. condense multiple log lines into a smaller, single-line entry) or use a different drain destination that can keep up with the log volume you are producing.
2011-05-03T17:40:10+00:00 heroku[logplex]: L10 (Drain buffer overflow) -> This drain dropped 1101 messages since 2011-05-03T17:35:00+00:00
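One way to reduce log volume, as suggested above, is to emit a single structured line per event instead of several free-form lines. A minimal sketch using Ruby's standard Logger (the field names are illustrative):

require 'logger'

logger = Logger.new($stdout)

# condense what would otherwise be several lines for one request into one entry
logger.info('request_id=42 path=/posts status=200 duration_ms=87 cache=hit')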
A heroku logs --tail session cannot keep up with the volume of logs generated by the application or log channel, and Logplex has discarded some log lines necessary to catch up. To avoid this error you will need to run the command on a faster internet connection (increase the rate at which you can receive logs) or modify your application to reduce the logging volume (decrease the rate at which logs are generated).
2011-05-03T17:40:10+00:00 heroku[logplex]: L11 (Tail buffer overflow) -> This tail session dropped 1101 messages since 2011-05-03T17:35:00+00:00