Traditionally it takes at least a minute or two before I see any actual output, whether I'm following a stream of Fargate ECS task output or just downloading all of it. However, I noticed that for some ECS containers I provisioned with five times the memory and twice the CPU allocation, I get those logs/streams instantly when I call awslogs. It doesn't make sense: there's no way to configure log capture differently between the tasks, the regions are the same, everything goes through CloudWatch, the behavior affects both running and finished tasks, and the high-resource tasks log massively more than the others (you might expect the behavior to be reversed). Are there any observations I might be missing? Is there any wisdom that might help reconcile this behavior?
I'm not referring to the initial ECS startup. I mean mid-stream, where the log is hot and has been running for a while. I can view a running log instantly in the CloudWatch console, and I can pull it up instantly with "aws logs tail". For some reason, awslogs blocked for maybe three minutes before it started showing the exact same data.
The commands, for comparison:
aws logs tail /ecs/extraction/scheduled --log-stream-names ecs/extractor-container/ac470d3b1a554077953f5ad121458fc1 --format short --follow
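For reference, the scaling you would normally expect (and which makes the observed behavior surprising) can be sketched with a toy model. This is not how awslogs is actually implemented; it is just a hypothetical illustration of why a client that pages through the existing backlog before reaching "now" should get slower as the stream grows, while a client that starts at the tail should be near-instant regardless of backlog size. All constants are made up.

```python
# Toy model (NOT awslogs internals): time-to-first-new-event for two
# hypothetical retrieval strategies against a CloudWatch-style paged API.
# ROUND_TRIP and PAGE_SIZE are invented numbers for illustration only.

ROUND_TRIP = 0.05   # assumed seconds per API round-trip
PAGE_SIZE = 10_000  # assumed events returned per page

def time_to_first_new_event(backlog_events: int, from_head: bool) -> float:
    """Seconds of API time before the first *new* (live) event appears."""
    if not from_head:
        # Start at the tail ("aws logs tail"-style): one request suffices.
        return ROUND_TRIP
    # Page through the whole existing backlog first, then one more
    # request to pick up live events.
    pages = -(-backlog_events // PAGE_SIZE)  # ceiling division
    return (pages + 1) * ROUND_TRIP

print(f"page through 50k backlog: {time_to_first_new_event(50_000, True):.2f}s")
print(f"page through 5M backlog:  {time_to_first_new_event(5_000_000, True):.2f}s")
print(f"tail from now:            {time_to_first_new_event(5_000_000, False):.2f}s")
```

Under this model, the heavy-logging tasks should be the slow ones, which is the opposite of what I'm seeing — so whatever awslogs does when following a stream, it doesn't appear to behave like simple backlog paging here.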